[jira] [Commented] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15270200#comment-15270200
 ] 

Hudson commented on HADOOP-12101:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9711 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9711/])
HADOOP-12101. Add automatic search of default Configuration variables to 
TestConfigurationFieldsBase. (iwasakims: rev 355325bcc7111fa4aac801fd23a26422ffabaf7c)
* dev-support/verify-xml.sh
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java


> Add automatic search of default Configuration variables to 
> TestConfigurationFieldsBase
> --
>
> Key: HADOOP-12101
> URL: https://issues.apache.org/jira/browse/HADOOP-12101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Attachments: HADOOP-12101.001.patch, HADOOP-12101.002.patch, 
> HADOOP-12101.003.patch, HADOOP-12101.004.patch, HADOOP-12101.005.patch, 
> HADOOP-12101.006.patch, HADOOP-12101.007.patch, HADOOP-12101.008.patch, 
> HADOOP-12101.009.patch, HADOOP-12101.010.patch, HADOOP-12101.011.patch, 
> HADOOP-12101.012.patch, HADOOP-12101.013.patch, HADOOP-12101.014.patch, 
> HADOOP-12101.015.patch, HADOOP-12101.016.patch
>
>
> Add functionality so that, given a Configuration variable FOO, the test at 
> least checks the XML file value against DEFAULT_FOO.
> Without waivers and a mapping for exceptions, this can probably never be a 
> test method that generates actual errors.
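
A rough illustration of the idea (not the actual TestConfigurationFieldsBase code; the ExampleKeys class and field names below are hypothetical): walk the DEFAULT_* constants of a keys class via reflection and compare each one with the value the loaded XML resources provide, reporting mismatches rather than failing hard.

{code}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import org.apache.hadoop.conf.Configuration;

public class DefaultValueCheck {
  // Hypothetical constants class standing in for a real *ConfKeys class.
  static class ExampleKeys {
    public static final String FOO = "example.foo.setting";
    public static final long DEFAULT_FOO = 42L;
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // loads *-default.xml resources
    for (Field f : ExampleKeys.class.getDeclaredFields()) {
      if (!Modifier.isStatic(f.getModifiers()) || !f.getName().startsWith("DEFAULT_")) {
        continue;
      }
      // DEFAULT_FOO -> FOO, whose value is the property name used in the XML file.
      Field keyField = ExampleKeys.class.getDeclaredField(
          f.getName().substring("DEFAULT_".length()));
      String property = (String) keyField.get(null);
      String xmlValue = conf.get(property);
      String codeDefault = String.valueOf(f.get(null));
      if (xmlValue != null && !xmlValue.equals(codeDefault)) {
        // Report only: without waivers this cannot be a hard test failure.
        System.out.println("Mismatch for " + property + ": xml=" + xmlValue
            + " code=" + codeDefault);
      }
    }
  }
}
{code}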






[jira] [Commented] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-05-03 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15270173#comment-15270173
 ] 

Masatake Iwasaki commented on HADOOP-12101:
---

+1, committing this.

> Add automatic search of default Configuration variables to 
> TestConfigurationFieldsBase
> --
>
> Key: HADOOP-12101
> URL: https://issues.apache.org/jira/browse/HADOOP-12101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Attachments: HADOOP-12101.001.patch, HADOOP-12101.002.patch, 
> HADOOP-12101.003.patch, HADOOP-12101.004.patch, HADOOP-12101.005.patch, 
> HADOOP-12101.006.patch, HADOOP-12101.007.patch, HADOOP-12101.008.patch, 
> HADOOP-12101.009.patch, HADOOP-12101.010.patch, HADOOP-12101.011.patch, 
> HADOOP-12101.012.patch, HADOOP-12101.013.patch, HADOOP-12101.014.patch, 
> HADOOP-12101.015.patch, HADOOP-12101.016.patch
>
>
> Add functionality so that, given a Configuration variable FOO, the test at 
> least checks the XML file value against DEFAULT_FOO.
> Without waivers and a mapping for exceptions, this can probably never be a 
> test method that generates actual errors.






[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15270154#comment-15270154
 ] 

John Zhuge commented on HADOOP-13079:
-

Target 3.0.0 for now.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.
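
A minimal, self-contained sketch of the behavior described above (illustrative only, not the patch): mask control characters the way "ls -q" does, and switch on a terminal check. The real change would use JNI to call C isatty(STDOUT_FILENO); this sketch falls back to the weaker System.console() test, which the description notes does not work in some cases (for example, it also returns null when only stdin is redirected).

{code}
public final class LsQuoting {
  // Replace ASCII control characters with '?', approximating !isprint(3).
  static String maskNonPrintable(String name) {
    StringBuilder sb = new StringBuilder(name.length());
    for (int i = 0; i < name.length(); i++) {
      char c = name.charAt(i);
      sb.append(Character.isISOControl(c) ? '?' : c);
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // Stand-in for isatty(1); the JIRA proposes a JNI call here instead.
    boolean stdoutIsTerminal = System.console() != null;
    String rawName = "evil\u001b]0;pwned\u0007name";   // embedded escape sequence
    System.out.println(stdoutIsTerminal ? maskNonPrintable(rawName) : rawName);
  }
}
{code}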






[jira] [Updated] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13079:

Target Version/s: 3.0.0

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.






[jira] [Updated] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13079:

Affects Version/s: 2.6.0

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.






[jira] [Commented] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15270053#comment-15270053
 ] 

Hadoop QA commented on HADOOP-13065:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 8m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 33s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 28s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 9s 
{color} | {color:red} root: The patch generated 33 new + 208 unchanged - 1 
fixed = 241 total (was 209) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 51s 
{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 26s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 47s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 18s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 54s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color}

[jira] [Updated] (HADOOP-12936) modify hadoop-tools to take advantage of dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12936:
--
Status: Patch Available  (was: Open)

> modify hadoop-tools to take advantage of dynamic subcommands
> 
>
> Key: HADOOP-12936
> URL: https://issues.apache.org/jira/browse/HADOOP-12936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12936-HADOOP-12930.00.patch
>
>
> Currently, hadoop-tools content is hard-coded in the various other parts of 
> hadoop.  It should really be dynamic, such that if hadoop-tools hasn't been 
> installed, it shouldn't appear to be available.






[jira] [Updated] (HADOOP-12936) modify hadoop-tools to take advantage of dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12936:
--
Attachment: HADOOP-12936-HADOOP-12930.00.patch

> modify hadoop-tools to take advantage of dynamic subcommands
> 
>
> Key: HADOOP-12936
> URL: https://issues.apache.org/jira/browse/HADOOP-12936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12936-HADOOP-12930.00.patch
>
>
> Currently, hadoop-tools content is hard-coded in the various other parts of 
> hadoop.  It should really be dynamic, such that if hadoop-tools hasn't been 
> installed, it shouldn't appear to be available.






[jira] [Commented] (HADOOP-12469) distcp should not ignore the ignoreFailures option

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15270001#comment-15270001
 ] 

Hadoop QA commented on HADOOP-12469:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 37s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 9s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 51s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802090/HADOOP-12469.006.patch
 |
| JIRA Issue | HADOOP-12469 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 680afd66f367 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6d77d6e |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /us

[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269997#comment-15269997
 ] 

Andrew Wang commented on HADOOP-13079:
--

FWIW I think FreeBSD and OpenBSD default to printing "?" rather than the 
control character:

http://www.freebsd.org/cgi/man.cgi?ls
http://man.openbsd.org/OpenBSD-current/man1/ls.1

{quote}
 -q  Force printing of non-graphic characters in file names as the
 character `?'; this is the default when output is to a terminal.
{quote}

Allen, do you have some counter-examples where someone coming from a NIXy 
background would be confused by -q as default? IIRC OSX uses a FreeBSD 
userspace, so it seems like for the majority of Hadoop users, -q is already the 
expectation.

John, do you mind setting the target version? If we have compatibility 
concerns, maybe this only targets Hadoop 3.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.






[jira] [Updated] (HADOOP-12469) distcp should not ignore the ignoreFailures option

2016-05-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12469:
---
Attachment: HADOOP-12469.006.patch

Thanks [~jingzhao] for the comments. The v6 patch addresses them.

> distcp should not ignore the ignoreFailures option
> --
>
> Key: HADOOP-12469
> URL: https://issues.apache.org/jira/browse/HADOOP-12469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Gera Shegalov
>Assignee: Mingliang Liu
>Priority: Critical
> Attachments: HADOOP-12469.000.patch, HADOOP-12469.001.patch, 
> HADOOP-12469.002.patch, HADOOP-12469.003.patch, HADOOP-12469.004.patch, 
> HADOOP-12469.005.patch, HADOOP-12469.005.patch, HADOOP-12469.006.patch
>
>
> {{RetriableFileCopyCommand.CopyReadException}} is double-wrapped:
> # via {{RetriableCommand::execute}}
> # via {{CopyMapper#copyFileWithRetry}}
> before {{CopyMapper::handleFailure}} tests 
> {code}
> if (ignoreFailures && exception.getCause() instanceof
> RetriableFileCopyCommand.CopyReadException) {
> {code}
> which is therefore always false.
> Orthogonally, ignoring failures should be mutually exclusive with the atomic 
> option; otherwise an incomplete dir is eligible for commit, defeating the 
> purpose.
>  
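
A sketch (not the actual HADOOP-12469 patch) of how the failure check could walk the whole cause chain instead of looking only one level deep, so a double-wrapped CopyReadException is still recognized; the nested CopyReadException class below is a stand-in so the example compiles on its own.

{code}
import java.io.IOException;

final class FailureHandling {
  // Stand-in for RetriableFileCopyCommand.CopyReadException.
  static class CopyReadException extends IOException {
    CopyReadException(String msg) { super(msg); }
  }

  // True if any throwable in the cause chain is a CopyReadException.
  static boolean hasCopyReadCause(Throwable exception) {
    for (Throwable t = exception; t != null; t = t.getCause()) {
      if (t instanceof CopyReadException) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // Simulate the double wrapping described above.
    Throwable wrapped = new RuntimeException(new RuntimeException(
        new CopyReadException("read failed")));
    boolean ignoreFailures = true;
    if (ignoreFailures && hasCopyReadCause(wrapped)) {
      System.out.println("failure ignored as requested");
    }
  }
}
{code}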






[jira] [Commented] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-03 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269925#comment-15269925
 ] 

Mingliang Liu commented on HADOOP-13065:


Hi [~cmccabe], I was addressing the initial use case with the newly designed 
API. Two questions about the v8 patch baffle me. 1) How to maintain a shared 
op->count storage statistic across all file system objects and threads. I 
think our use case does not need per-FileSystem stats like S3A; my first idea 
was to register a single instance with the global storage statistics. 2) How 
to implement the single counter for each operation. Since we need atomic 
increments across threads, I'm wondering how the {{volatile long}} comes into 
play. I agree with your previous comment that the thread-local implementation 
is not ideal for this use case, as the RPC call will generally dominate the 
total overhead anyway. If that holds, an AtomicLong would work just fine.

Do you have any quick comments about this? Thanks.
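
A minimal sketch of the AtomicLong-based, shared op->count store discussed above (class and method names are illustrative, not the HADOOP-13065 API): a {{volatile long}} alone cannot provide an atomic increment, whereas AtomicLong (or LongAdder for very hot counters) can.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public final class OpCountStatistics {
  // One shared map of operation name -> counter, used by all threads.
  private final Map<String, AtomicLong> counts = new ConcurrentHashMap<>();

  // Atomically increments the counter for the given operation.
  public void incrementCounter(String op) {
    counts.computeIfAbsent(op, k -> new AtomicLong()).incrementAndGet();
  }

  public long getCount(String op) {
    AtomicLong c = counts.get(op);
    return c == null ? 0L : c.get();
  }

  public static void main(String[] args) {
    OpCountStatistics stats = new OpCountStatistics();
    stats.incrementCounter("mkdirs");
    stats.incrementCounter("mkdirs");
    stats.incrementCounter("rename");
    System.out.println("mkdirs=" + stats.getCount("mkdirs")
        + " rename=" + stats.getCount("rename"));
  }
}
{code}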

> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, 
> HDFS-10175.000.patch, HDFS-10175.001.patch, HDFS-10175.002.patch, 
> HDFS-10175.003.patch, HDFS-10175.004.patch, HDFS-10175.005.patch, 
> HDFS-10175.006.patch, TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> The logic within DfsClient that maps operations to these counters can be 
> confusing; for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.






[jira] [Commented] (HADOOP-8065) distcp should have an option to compress data while copying.

2016-05-03 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269857#comment-15269857
 ] 

Ravi Prakash commented on HADOOP-8065:
--

Thanks for the patch [~snayakm]! Here are some of my thoughts:

# What users seem to want is to be able to compress data *during transit*. 
{color:red}*This patch does not enable compression of data during 
transit.*{color} Distcp is simply an MR job whose maps read from a 
"source". If the source does not support compressing the data before putting 
it on the network, I don't see how we could achieve what these users want.
# *We are simply enabling users to avoid a post-processing step to compress the 
data they have already transferred*. This too is a noble goal if it makes the 
lives of users easier IMHO. It also reduces the amount of space needed on the 
target filesystem. We should rewrite the JIRA summary to be more explicit if 
that is the stated goal.

Reviewing the patch:
# Do you really need the changes in {{CopyMapper}}?
# Nit: {{getCompressionCodcec}} is misspelt
# Instead of {code}  e.printStackTrace();
  LOG.error("Compression class " + compressionCodecClass
  + " not found in classpath");{code} you can simply pass {{e}} as a 
second argument to the LOG.error method.
# With this patch, we'll end up creating an instance of a Codec for every file. 
Do you think we could utilize something like 
{{org.apache.hadoop.io.compress.CodecPool}}? (See the sketch after this list.)
# Perhaps we can add an option {{-compressOutput}} that defaults to some codec?
# Although it's conceivable that we may want to decompress before writing to the 
target filesystem, we can punt that to another JIRA.

Thanks for your efforts! :-)
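
A sketch of review points 3 and 4 above (passing the exception to LOG.error and borrowing a compressor from CodecPool rather than creating a codec per file). Class and method names here are illustrative, not the HADOOP-8065 patch:

{code}
import java.io.OutputStream;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.util.ReflectionUtils;

final class CompressedCopyHelper {
  private static final Log LOG = LogFactory.getLog(CompressedCopyHelper.class);

  // Wraps the raw output stream with the requested codec, or returns it
  // unchanged (with a logged error) if the codec class cannot be loaded.
  static OutputStream wrap(OutputStream raw, String codecClassName,
      Configuration conf) {
    try {
      Class<?> clazz = conf.getClassByName(codecClassName);
      CompressionCodec codec =
          (CompressionCodec) ReflectionUtils.newInstance(clazz, conf);
      // Borrow a compressor from the pool instead of creating one per file;
      // return it with CodecPool.returnCompressor() once the copy finishes.
      Compressor compressor = CodecPool.getCompressor(codec);
      return codec.createOutputStream(raw, compressor);
    } catch (Exception e) {
      // Pass the exception as the second argument instead of printStackTrace().
      LOG.error("Compression class " + codecClassName
          + " not found in classpath", e);
      return raw;
    }
  }
}
{code}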

> distcp should have an option to compress data while copying.
> 
>
> Key: HADOOP-8065
> URL: https://issues.apache.org/jira/browse/HADOOP-8065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.20.2
>Reporter: Suresh Antony
>Assignee: Suraj Nayak
>Priority: Minor
>  Labels: distcp
> Fix For: 0.20.2
>
> Attachments: HADOOP-8065-trunk_2015-11-03.patch, 
> HADOOP-8065-trunk_2015-11-04.patch, HADOOP-8065-trunk_2016-04-29-4.patch, 
> patch.distcp.2012-02-10
>
>
> We would like to compress the data while transferring it from our source 
> system to the target system. One way to do this is to write a map/reduce job 
> that compresses the data after/before it is transferred, but this looks 
> inefficient. 
> Since distcp is already reading and writing the data, it would be better if 
> it could compress while doing so. 
> The flip side is that the distcp -update option cannot check the file size 
> before copying data; it can only check for the existence of the file. 
> So I propose that if the -compress option is given, the file size is not checked.
> Also, when we copy a file, the appropriate extension needs to be added 
> depending on the compression type.






[jira] [Commented] (HADOOP-12469) distcp should not ignore the ignoreFailures option

2016-05-03 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269816#comment-15269816
 ] 

Jing Zhao commented on HADOOP-12469:


Instead of replacing the old unit test (which deletes some source files to 
generate failures), maybe we can add a new unit test. +1 after addressing the 
comment.

> distcp should not ignore the ignoreFailures option
> --
>
> Key: HADOOP-12469
> URL: https://issues.apache.org/jira/browse/HADOOP-12469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Gera Shegalov
>Assignee: Mingliang Liu
>Priority: Critical
> Attachments: HADOOP-12469.000.patch, HADOOP-12469.001.patch, 
> HADOOP-12469.002.patch, HADOOP-12469.003.patch, HADOOP-12469.004.patch, 
> HADOOP-12469.005.patch, HADOOP-12469.005.patch
>
>
> {{RetriableFileCopyCommand.CopyReadException}} is double-wrapped:
> # via {{RetriableCommand::execute}}
> # via {{CopyMapper#copyFileWithRetry}}
> before {{CopyMapper::handleFailure}} tests 
> {code}
> if (ignoreFailures && exception.getCause() instanceof
> RetriableFileCopyCommand.CopyReadException) {
> {code}
> which is therefore always false.
> Orthogonally, ignoring failures should be mutually exclusive with the atomic 
> option; otherwise an incomplete dir is eligible for commit, defeating the 
> purpose.
>  






[jira] [Updated] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13065:
---
Attachment: HADOOP-13065.008.patch

> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, 
> HDFS-10175.000.patch, HDFS-10175.001.patch, HDFS-10175.002.patch, 
> HDFS-10175.003.patch, HDFS-10175.004.patch, HDFS-10175.005.patch, 
> HDFS-10175.006.patch, TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> The logic within DfsClient that maps operations to these counters can be 
> confusing; for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.






[jira] [Commented] (HADOOP-13083) The number of javadocs warnings is limited to 100

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269761#comment-15269761
 ] 

Hadoop QA commented on HADOOP-13083:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 5s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 6s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801760/YARN-4978.001.patch |
| JIRA Issue | HADOOP-13083 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 0ec5fe45f61d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ed54f5f |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9267/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/Pr

[jira] [Commented] (HADOOP-13083) The number of javadocs warnings is limited to 100

2016-05-03 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269724#comment-15269724
 ] 

Li Lu commented on HADOOP-13083:


Moved the patch to HADOOP. The fix looks good to me but I would like to check 
if this will break anything on common and/or HDFS. 

> The number of javadocs warnings is limited to 100 
> --
>
> Key: HADOOP-13083
> URL: https://issues.apache.org/jira/browse/HADOOP-13083
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Gergely Novák
> Attachments: YARN-4978.001.patch
>
>
> We are generating a lot of javadoc warnings with JDK 1.8. Right now the 
> number is limited to 100. Enlarging this limit can probably reveal more 
> problems in one batch during our javadoc generation process. 






[jira] [Moved] (HADOOP-13083) The number of javadocs warnings is limited to 100

2016-05-03 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu moved YARN-4978 to HADOOP-13083:
--

Key: HADOOP-13083  (was: YARN-4978)
Project: Hadoop Common  (was: Hadoop YARN)

> The number of javadocs warnings is limited to 100 
> --
>
> Key: HADOOP-13083
> URL: https://issues.apache.org/jira/browse/HADOOP-13083
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Gergely Novák
> Attachments: YARN-4978.001.patch
>
>
> We are generating a lot of javadoc warnings with JDK 1.8. Right now the 
> number is limited to 100. Enlarging this limit can probably reveal more 
> problems in one batch during our javadoc generation process. 






[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269621#comment-15269621
 ] 

Hadoop QA commented on HADOOP-12291:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
33s {color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 39s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
32s {color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 30s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_92. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 49s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 33s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802016/HADOOP-12291.003.patch
 |
| JIRA Issue | HADOOP-12291 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 6fe9a702ed30 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /test

[jira] [Updated] (HADOOP-12936) modify hadoop-tools to take advantage of dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12936:
--
Description: Currently, hadoop-tools content is hard-coded in the various 
other parts of hadoop.  It should really be dynamic, such that if hadoop-tools 
hasn't been installed, it shouldn't appear to be available.

> modify hadoop-tools to take advantage of dynamic subcommands
> 
>
> Key: HADOOP-12936
> URL: https://issues.apache.org/jira/browse/HADOOP-12936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> Currently, hadoop-tools content is hard-coded in the various other parts of 
> hadoop.  It should really be dynamic, such that if hadoop-tools hasn't been 
> installed, it shouldn't appear to be available.






[jira] [Assigned] (HADOOP-12936) modify hadoop-tools to take advantage of dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-12936:
-

Assignee: Allen Wittenauer

> modify hadoop-tools to take advantage of dynamic subcommands
> 
>
> Key: HADOOP-12936
> URL: https://issues.apache.org/jira/browse/HADOOP-12936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>







[jira] [Updated] (HADOOP-12932) bin/yarn work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12932:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12930
   Status: Resolved  (was: Patch Available)

> bin/yarn work for dynamic subcommands
> -
>
> Key: HADOOP-12932
> URL: https://issues.apache.org/jira/browse/HADOOP-12932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: HADOOP-12930
>
> Attachments: HADOOP-12932-HADOOP-12930.00.patch, 
> HADOOP-12932-HADOOP-12930.01.patch
>
>
> Do the necessary plumbing to enable yarn_subcmd_blah






[jira] [Commented] (HADOOP-13018) Make Kdiag check whether hadoop.token.files points to existent and valid files

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269565#comment-15269565
 ] 

Hadoop QA commented on HADOOP-13018:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
53s {color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
7 new + 80 unchanged - 0 fixed = 87 total (was 80) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 50s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_92. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 0s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_92 Failed junit tests | hadoop.security.TestKDiagNoKDC |
|   | hadoop.ipc.TestRPCWaitForProxy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802006/HADOOP-13018.03.patch 
|
| JIRA Issue | HADOOP-13018 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 52ea76702e48 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86

[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269527#comment-15269527
 ] 

Colin Patrick McCabe commented on HADOOP-13079:
---

bq. Yup, I can't think of why -q should be the default either... but more 
importantly, neither could POSIX to the point that it demanded the standard 
have -q be the default.

Please do not misquote what I said.  I was arguing that echoing control 
characters to the terminal should not be the default behavior.  You are arguing 
the opposite.

bq. ... until such a point that they print the filename to the screen to show 
what files are being processed. At which point this change has accomplished 
absolutely nothing. Changing ls is security theater.

There are a lot of scripts that interact with HDFS via FsShell.  These scripts 
will never "print the filename to the screen" or if they do, it will be a 
filename that they got from {{ls}} itself which does not contain control 
characters.

I could come up with examples of how this is helpful all day if needed.  Here's 
another one: Some sysadmin logs in and does a {{hadoop fs -ls}} of a directory 
created by {{\$BADGUY}}. Should the filename be able to use control characters to 
hijack the admin's GNU screen session and execute arbitrary code?  I would say 
no; what do you say?

bq. Are we going to change cat too?

Most system administrators will not {{cat}} a file without checking what type 
it is.  It is well-known that catting an unknown file could mess up the 
terminal.  On the other hand, most system administrators do not think that 
running {{ls}} on a directory could be a security risk.  Linux and other well 
known operating systems also do not protect users from this, so there are no 
pre-existing expectations of protection.

bq. Then stop bringing up (traditional) UNIX if you feel it isn't relevant and 
especially when you've used the term incorrectly.

There are a huge number of sysadmins who grew up with the GNU tools, which do 
have the behavior we're describing here.  It's a powerful argument for 
implementing that behavior.  When you add the fact that it fixes security 
vulnerabilities, it's an extremely compelling argument.

I think it's clear that this change does have a big positive effect in many 
scenarios, does fix real-world security flaws, and does accord with the 
expectations of most system administrators.  That's three powerful reasons to 
do it.  I can find no valid counter-argument for any of these reasons anywhere 
in these comments.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.
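
As a purely illustrative aside for readers following the thread, here is a 
minimal sketch of the decision the description above outlines. It is not the 
actual patch: the class name and flag parameters are made up, 
{{Character.isISOControl}} is only a rough stand-in for the locale-aware 
{{isprint(3)}} test, and {{System.console()}} is shown as the imperfect 
pure-Java proxy that the proposed JNI {{isatty()}} call would replace.

{code:java}
// Illustrative sketch only, not the HADOOP-13079 patch. Character.isISOControl
// approximates "non-printable"; the proposal uses locale-aware isprint(3) and
// detects terminals via JNI isatty() instead of the System.console() heuristic.
public final class LsQuotingSketch {

  /** Replace control characters with '?' the way ls -q does. */
  static String scrub(String name) {
    StringBuilder sb = new StringBuilder(name.length());
    for (int i = 0; i < name.length(); i++) {
      char c = name.charAt(i);
      sb.append(Character.isISOControl(c) ? '?' : c);
    }
    return sb.toString();
  }

  /** Quote by default on a terminal; print raw when output is redirected. */
  static String format(String name, boolean forceQuote, boolean forceRaw) {
    boolean onTerminal = System.console() != null;  // imperfect proxy for isatty(STDOUT_FILENO)
    boolean quote = forceQuote || (onTerminal && !forceRaw);
    return quote ? scrub(name) : name;
  }

  public static void main(String[] args) {
    // A filename carrying terminal escape/bell bytes gets them neutralized to '?'.
    System.out.println(format("evil\u001B]0;pwned\u0007name", true, false));
  }
}
{code}

With the quoting on, the escape and bell bytes come out as {{?}}; with raw 
output (for example when piped), the bytes pass through unchanged, which is the 
{{od -c}} case in the description.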



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12932) bin/yarn work for dynamic subcommands

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269487#comment-15269487
 ] 

Hadoop QA commented on HADOOP-12932:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
12s {color} | {color:green} The patch generated 0 new + 84 unchanged - 6 fixed 
= 84 total (was 90) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
37s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 39s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802025/HADOOP-12932-HADOOP-12930.01.patch
 |
| JIRA Issue | HADOOP-12932 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux 942a405210d8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12930 / 3035ee2 |
| shellcheck | v0.4.3 |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9266/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> bin/yarn work for dynamic subcommands
> -
>
> Key: HADOOP-12932
> URL: https://issues.apache.org/jira/browse/HADOOP-12932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12932-HADOOP-12930.00.patch, 
> HADOOP-12932-HADOOP-12930.01.patch
>
>
> Do the necessary plumbing to enable yarn_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12932) bin/yarn work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12932:
--
Attachment: HADOOP-12932-HADOOP-12930.01.patch

> bin/yarn work for dynamic subcommands
> -
>
> Key: HADOOP-12932
> URL: https://issues.apache.org/jira/browse/HADOOP-12932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12932-HADOOP-12930.00.patch, 
> HADOOP-12932-HADOOP-12930.01.patch
>
>
> Do the necessary plumbing to enable yarn_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12932) bin/yarn work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12932:
--
Attachment: (was: HADOOP-12932-HADOOP-12930.01.patch)

> bin/yarn work for dynamic subcommands
> -
>
> Key: HADOOP-12932
> URL: https://issues.apache.org/jira/browse/HADOOP-12932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12932-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable yarn_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12932) bin/yarn work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12932:
--
Attachment: HADOOP-12932-HADOOP-12930.01.patch

> bin/yarn work for dynamic subcommands
> -
>
> Key: HADOOP-12932
> URL: https://issues.apache.org/jira/browse/HADOOP-12932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12932-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable yarn_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12933) bin/hdfs work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12933:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12930
   Status: Resolved  (was: Patch Available)

> bin/hdfs work for dynamic subcommands
> -
>
> Key: HADOOP-12933
> URL: https://issues.apache.org/jira/browse/HADOOP-12933
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: HADOOP-12930
>
> Attachments: HADOOP-12933-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable hdfs_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12933) bin/hdfs work for dynamic subcommands

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269433#comment-15269433
 ] 

Hadoop QA commented on HADOOP-12933:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
40s {color} | {color:green} HADOOP-12930 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} HADOOP-12930 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
11s {color} | {color:green} The patch generated 0 new + 95 unchanged - 1 fixed 
= 95 total (was 96) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 23s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.8.0_92. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 4s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801994/HADOOP-12933-HADOOP-12930.00.patch
 |
| JIRA Issue | HADOOP-12933 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux b236121a2e3d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12930 / 7972093 |
| shellcheck | v0.4.3 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9262/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9262/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> bin/hdfs work for dynamic subcommands
> -
>
> Key: HADOOP-12933
> URL: https://issues.apache.org/jira/browse/HADOOP-12933
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12933-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable hdfs_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12932) bin/yarn work for dynamic subcommands

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269425#comment-15269425
 ] 

Hadoop QA commented on HADOOP-12932:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 13s 
{color} | {color:red} The patch generated 1 new + 94 unchanged - 1 fixed = 95 
total (was 95) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
38s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 49s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802000/HADOOP-12932-HADOOP-12930.00.patch
 |
| JIRA Issue | HADOOP-12932 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux 751b122b8e19 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12930 / 7972093 |
| shellcheck | v0.4.3 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9264/artifact/patchprocess/diff-patch-shellcheck.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9264/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> bin/yarn work for dynamic subcommands
> -
>
> Key: HADOOP-12932
> URL: https://issues.apache.org/jira/browse/HADOOP-12932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12932-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable yarn_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269416#comment-15269416
 ] 

Allen Wittenauer commented on HADOOP-13079:
---

bq. OK, so Linux is technically a UNIX-like system rather than a licensee of 
the UNIX trademark. I don't feel that this is relevant to the discussion here.

Then stop bringing up (traditional) UNIX if you feel it isn't relevant and 
especially when you've used the term incorrectly.

bq.  Dumping control characters out on an interactive terminal is a security 
vulnerability

It is, but changing ls' behavior isn't going to fix that vulnerability.  
There's a reason why all of those links up there you quoted talk about 
terminals and terminal emulation and what is actually vulnerable without 
identifying specific vulnerabilities in specific commands.

bq.  I can't think of a single reason why we would want this to be the default.

Yup, I can't think of why -q should be the default either... but more 
importantly, neither could POSIX to the point that it demanded the standard 
have -q be the default. 

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12934) bin/mapred work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12934:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12930
   Status: Resolved  (was: Patch Available)

> bin/mapred work for dynamic subcommands
> ---
>
> Key: HADOOP-12934
> URL: https://issues.apache.org/jira/browse/HADOOP-12934
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: HADOOP-12930
>
> Attachments: HADOOP-12934-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable mapred_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12934) bin/mapred work for dynamic subcommands

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269403#comment-15269403
 ] 

Hadoop QA commented on HADOOP-12934:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
12s {color} | {color:green} The patch generated 0 new + 90 unchanged - 5 fixed 
= 90 total (was 95) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
39s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 47s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802004/HADOOP-12934-HADOOP-12930.00.patch
 |
| JIRA Issue | HADOOP-12934 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux eb5219f46e8b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12930 / 7972093 |
| shellcheck | v0.4.3 |
| modules | C: hadoop-mapreduce-project U: hadoop-mapreduce-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9261/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> bin/mapred work for dynamic subcommands
> ---
>
> Key: HADOOP-12934
> URL: https://issues.apache.org/jira/browse/HADOOP-12934
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12934-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable mapred_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269389#comment-15269389
 ] 

Allen Wittenauer commented on HADOOP-13079:
---

Are we going to change cat too?

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269381#comment-15269381
 ] 

Allen Wittenauer commented on HADOOP-13079:
---

... until such a point that they print the filename to the screen to show what 
files are being processed. At which point this change has accomplished 
absolutely nothing.  Changing ls is security theater.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269375#comment-15269375
 ] 

Colin Patrick McCabe commented on HADOOP-13079:
---

OK, so Linux is technically a UNIX-like system rather than a licensee of the 
UNIX trademark.  I don't feel that this is relevant to the discussion here.  I 
feel like you are just being pedantic.  Linux's behavior is still the one that 
most people compare our behavior to, whether we like it or not.  And Linux's 
behavior is to hide control characters by default in ls.

More importantly, Linux's behavior makes more sense than the other behavior you 
are suggesting.  Dumping control characters out on an interactive terminal is a 
security vulnerability as well as a giant annoyance.  I can't think of a single 
reason why we would want this to be the default.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-03 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Kundin updated HADOOP-12291:
---
Attachment: HADOOP-12291.003.patch

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So, for example, if user {{jdoe}} is part of group A, which is in 
> turn a member of group B, the group mapping currently returns only group A.
> This facility is currently available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have it as part of 
> {{LdapGroupsMapping}} directly.
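
To make the nested-group example concrete, here is a purely hypothetical 
sketch of what the resolution amounts to; it is not the attached patch and 
says nothing about the LDAP query mechanics. The class and method names are 
invented, and the in-memory maps merely stand in for directory lookups.

{code:java}
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical illustration of nested-group resolution, not the HADOOP-12291
// patch: a breadth-first walk over the group-of-groups graph.
public final class NestedGroupsSketch {

  static Set<String> resolve(Set<String> directGroups,
                             Map<String, Set<String>> parentGroups) {
    Set<String> result = new LinkedHashSet<>(directGroups);
    Deque<String> queue = new ArrayDeque<>(directGroups);
    while (!queue.isEmpty()) {
      String group = queue.poll();
      for (String parent
          : parentGroups.getOrDefault(group, Collections.<String>emptySet())) {
        if (result.add(parent)) {  // the add() guard also protects against cycles
          queue.add(parent);
        }
      }
    }
    return result;
  }

  public static void main(String[] args) {
    Set<String> jdoeDirect = new HashSet<>(Collections.singleton("A"));
    Map<String, Set<String>> parents = new HashMap<>();
    parents.put("A", Collections.singleton("B"));  // group A is a member of group B
    System.out.println(resolve(jdoeDirect, parents));  // prints [A, B]
  }
}
{code}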



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-03 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Kundin updated HADOOP-12291:
---
Status: Patch Available  (was: In Progress)

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So, for example, if user {{jdoe}} is part of group A, which is in 
> turn a member of group B, the group mapping currently returns only group A.
> This facility is currently available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have it as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269343#comment-15269343
 ] 

John Zhuge commented on HADOOP-13079:
-

[~aw], you mentioned that "default -q" will break stuff; do you have any use 
case or test case in mind? I can only see potential problems in Expect or 
similar tools that parse {{dfs -ls}} output in terminal mode. All scripts that 
redirect {{dfs -ls}} stdout should be fine with the "default -q" behavior.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12934) bin/mapred work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12934:
--
Attachment: HADOOP-12934-HADOOP-12930.00.patch

> bin/mapred work for dynamic subcommands
> ---
>
> Key: HADOOP-12934
> URL: https://issues.apache.org/jira/browse/HADOOP-12934
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12934-HADOOP-12930.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13018) Make Kdiag check whether hadoop.token.files points to existent and valid files

2016-05-03 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-13018:
--
Attachment: HADOOP-13018.03.patch

Ok Steve. Here's #3.

> Make Kdiag check whether hadoop.token.files points to existent and valid files
> --
>
> Key: HADOOP-13018
> URL: https://issues.apache.org/jira/browse/HADOOP-13018
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HADOOP-13018.01.patch, HADOOP-13018.02.patch, 
> HADOOP-13018.03.patch
>
>
> Steve proposed that KDiag should fail fast to help debug the case where 
> hadoop.token.files points to a file that cannot be found. This JIRA is to 
> effect that.
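
For illustration only, a minimal sketch of the kind of fail-fast check being 
asked for, assuming nothing about the real KDiag code: the property name 
hadoop.token.files comes from the description, but the method name, the 
comma-separated parsing, and the messages are guesses.

{code:java}
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of a fail-fast validation for hadoop.token.files; not
// the actual KDiag change.
final class TokenFileCheckSketch {

  static void verifyTokenFiles(String hadoopTokenFiles) throws IOException {
    if (hadoopTokenFiles == null || hadoopTokenFiles.trim().isEmpty()) {
      return;  // nothing configured, nothing to verify
    }
    for (String entry : hadoopTokenFiles.split(",")) {
      File f = new File(entry.trim());
      if (!f.isFile()) {
        throw new IOException("hadoop.token.files entry does not exist: " + f);
      }
      if (f.length() == 0) {
        throw new IOException("hadoop.token.files entry is empty: " + f);
      }
    }
  }

  public static void main(String[] args) throws IOException {
    verifyTokenFiles(args.length > 0 ? args[0] : null);
    System.out.println("token files look sane");
  }
}
{code}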



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12932) bin/yarn work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12932:
--
Status: Patch Available  (was: Open)

> bin/yarn work for dynamic subcommands
> -
>
> Key: HADOOP-12932
> URL: https://issues.apache.org/jira/browse/HADOOP-12932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12932-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable yarn_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12934) bin/mapred work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12934:
--
Status: Patch Available  (was: Open)

> bin/mapred work for dynamic subcommands
> ---
>
> Key: HADOOP-12934
> URL: https://issues.apache.org/jira/browse/HADOOP-12934
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12934-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable mapred_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12934) bin/mapred work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12934:
--
Description: Do the necessary plumbing to enable mapred_subcmd_blah

> bin/mapred work for dynamic subcommands
> ---
>
> Key: HADOOP-12934
> URL: https://issues.apache.org/jira/browse/HADOOP-12934
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12934-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable mapred_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12934) bin/mapred work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-12934:
-

Assignee: Allen Wittenauer

> bin/mapred work for dynamic subcommands
> ---
>
> Key: HADOOP-12934
> URL: https://issues.apache.org/jira/browse/HADOOP-12934
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12934-HADOOP-12930.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12932) bin/yarn work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12932:
--
Attachment: HADOOP-12932-HADOOP-12930.00.patch

> bin/yarn work for dynamic subcommands
> -
>
> Key: HADOOP-12932
> URL: https://issues.apache.org/jira/browse/HADOOP-12932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12932-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable yarn_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12932) bin/yarn work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12932:
--
Description: 
Do the necessary plumbing to enable yarn_subcmd_blah


> bin/yarn work for dynamic subcommands
> -
>
> Key: HADOOP-12932
> URL: https://issues.apache.org/jira/browse/HADOOP-12932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>
> Do the necessary plumbing to enable yarn_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12932) bin/yarn work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-12932:
-

Assignee: Allen Wittenauer

> bin/yarn work for dynamic subcommands
> -
>
> Key: HADOOP-12932
> URL: https://issues.apache.org/jira/browse/HADOOP-12932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> Do the necessary plumbing to enable yarn_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12933) bin/hdfs work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12933:
--
Attachment: HADOOP-12933-HADOOP-12930.00.patch

> bin/hdfs work for dynamic subcommands
> -
>
> Key: HADOOP-12933
> URL: https://issues.apache.org/jira/browse/HADOOP-12933
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12933-HADOOP-12930.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12933) bin/hdfs work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12933:
--
Description: 
Do the necessary plumbing to enable hdfs_subcmd_blah


> bin/hdfs work for dynamic subcommands
> -
>
> Key: HADOOP-12933
> URL: https://issues.apache.org/jira/browse/HADOOP-12933
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12933-HADOOP-12930.00.patch
>
>
> Do the necessary plumbing to enable hdfs_subcmd_blah



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12933) bin/hdfs work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12933:
--
Status: Patch Available  (was: Open)

> bin/hdfs work for dynamic subcommands
> -
>
> Key: HADOOP-12933
> URL: https://issues.apache.org/jira/browse/HADOOP-12933
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12933-HADOOP-12930.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12933) bin/hdfs work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-12933:
-

Assignee: Allen Wittenauer

> bin/hdfs work for dynamic subcommands
> -
>
> Key: HADOOP-12933
> URL: https://issues.apache.org/jira/browse/HADOOP-12933
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12933-HADOOP-12930.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12931) bin/hadoop work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12931:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12930
   Status: Resolved  (was: Patch Available)

> bin/hadoop work for dynamic subcommands
> ---
>
> Key: HADOOP-12931
> URL: https://issues.apache.org/jira/browse/HADOOP-12931
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: HADOOP-12930
>
> Attachments: HADOOP-12931-HADOOP-12930.01.patch, HADOOP-12931.00.patch
>
>
> Do the necessary plumbing to enable hadoop_subcmd_blah 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12931) bin/hadoop work for dynamic subcommands

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269163#comment-15269163
 ] 

Hadoop QA commented on HADOOP-12931:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
36s {color} | {color:green} HADOOP-12930 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} HADOOP-12930 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
10s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 47s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_92. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 51s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801987/HADOOP-12931-HADOOP-12930.01.patch
 |
| JIRA Issue | HADOOP-12931 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 84c758383eff 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12930 / bc7cda2 |
| shellcheck | v0.4.3 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9260/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9260/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> bin/hadoop work for dynamic subcommands
> ---
>
> Key: HADOOP-12931
> URL: https://issues.apache.org/jira/browse/HADOOP-12931
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12931-HADOOP-12930.01.patch, HADOOP-12931.00.patch
>
>
> Do the necessary plumbing to enable hadoop_subcmd_blah 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13082) RawLocalFileSystem does not fail when moving file to a non-existing directory

2016-05-03 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13082:
--
Description: 
FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory creates a 
file and then moves it to a non-existing directory. This should fail, but with 
RawLocalFileSystem it does not, because the RawLocalFileSystem#rename(Path, 
Path) method has a fallback behavior that accomplishes the rename by a full 
copy. The full copy creates the new directory and copies the file there.

I see two possible solutions here:
# Remove the fallback full-copy behavior.
# Before the full copy, check whether the parent directory exists. If it does 
not, return false and do not do the full copy.

The fallback logic was added by 
[HADOOP-9805|https://issues.apache.org/jira/browse/HADOOP-9805].

  was:
FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory creates a 
file and then moves it to a non-existing directory. This should fail, but with 
RawLocalFileSystem it does not, because the RawLocalFileSystem#rename(Path, 
Path) method has a fallback behavior that accomplishes the rename by a full 
copy. The full copy creates the new directory and copies the file there.

I see two possible solutions here:
# Remove the fallback full-copy behavior.
# Before the full copy, check whether the parent directory exists. If it does 
not, return false and do not do the full copy.


> RawLocalFileSystem does not fail when moving file to a non-existing directory
> -
>
> Key: HADOOP-13082
> URL: https://issues.apache.org/jira/browse/HADOOP-13082
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory creates 
> a file and then moves it to a non-existing directory. This should fail, but 
> with RawLocalFileSystem it does not, because the RawLocalFileSystem#rename(Path, 
> Path) method has a fallback behavior that accomplishes the rename by a full 
> copy. The full copy creates the new directory and copies the file there.
> I see two possible solutions here:
> # Remove the fallback full-copy behavior.
> # Before the full copy, check whether the parent directory exists. If it does 
> not, return false and do not do the full copy.
> The fallback logic was added by 
> [HADOOP-9805|https://issues.apache.org/jira/browse/HADOOP-9805].
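
To make solution 2 concrete, here is a sketch in plain java.io/java.nio terms; 
it is explicitly not the actual RawLocalFileSystem#rename code, handles single 
files only, and the class and method names are invented. The point is simply 
that when the cheap rename fails and the destination's parent directory is 
missing, the copy fallback is skipped so the rename returns false, which is 
what the contract test expects.

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

// Illustration of solution 2 above, not the actual RawLocalFileSystem code.
final class RenameFallbackSketch {

  static boolean renameWithGuardedFallback(File src, File dst) {
    if (src.renameTo(dst)) {
      return true;                 // the cheap rename worked
    }
    File parent = dst.getParentFile();
    if (parent == null || !parent.isDirectory()) {
      return false;                // refuse to create the missing directory via the fallback
    }
    try {
      // Existing fallback idea: copy into the (existing) parent, then drop the source.
      Files.copy(src.toPath(), dst.toPath(), StandardCopyOption.COPY_ATTRIBUTES);
      return src.delete();
    } catch (IOException e) {
      return false;
    }
  }
}
{code}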



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12931) bin/hadoop work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12931:
--
Attachment: HADOOP-12931-HADOOP-12930.01.patch

> bin/hadoop work for dynamic subcommands
> ---
>
> Key: HADOOP-12931
> URL: https://issues.apache.org/jira/browse/HADOOP-12931
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12931-HADOOP-12930.01.patch, HADOOP-12931.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12931) bin/hadoop work for dynamic subcommands

2016-05-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12931:
--
Description: Do the necessary plumbing to enable hadoop_subcmd_blah 

> bin/hadoop work for dynamic subcommands
> ---
>
> Key: HADOOP-12931
> URL: https://issues.apache.org/jira/browse/HADOOP-12931
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12931-HADOOP-12930.01.patch, HADOOP-12931.00.patch
>
>
> Do the necessary plumbing to enable hadoop_subcmd_blah 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12930) [Umbrella] Dynamic subcommands for hadoop shell scripts

2016-05-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269074#comment-15269074
 ] 

Allen Wittenauer commented on HADOOP-12930:
---

rebased HADOOP-12930 with trunk

> [Umbrella] Dynamic subcommands for hadoop shell scripts
> ---
>
> Key: HADOOP-12930
> URL: https://issues.apache.org/jira/browse/HADOOP-12930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
>
> Umbrella for converting hadoop, hdfs, mapred, and yarn to allow for dynamic 
> subcommands. See first comment for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269081#comment-15269081
 ] 

Hadoop QA commented on HADOOP-12101:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 6s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
10s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 7s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_92. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 35s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_92 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801968/HADOOP-12101.016.patch
 |
| JIRA Issue | HADOOP-12101 |
| Optional Tests |  asflicense  mvnsite  

[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269029#comment-15269029
 ] 

Allen Wittenauer commented on HADOOP-13079:
---

bq. The most popular UNIX system on Earth, Linux

Linux is not UNIX(tm).  Please stop saying things that aren't true.

bq.  so nobody will be surprised by it.

... except for those who actually use Real UNIX(tm) and not some discount 
knock-off whose overly zealous followers believe they invented everything.

Keep in mind also that in order to do -q, you need an anti-q.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13073) RawLocalFileSystem does not react on changing umask

2016-05-03 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13073:
--
Issue Type: Bug  (was: Test)

> RawLocalFileSystem does not react on changing umask
> ---
>
> Key: HADOOP-13073
> URL: https://issues.apache.org/jira/browse/HADOOP-13073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-13073.01.patch
>
>
> FileSystemContractBaseTest#testMkdirsWithUmask is changing umask under the 
> filesystem. RawLocalFileSystem reads the config on startup so it will not 
> react if we change the umask.
> It blocks [HADOOP-7363|https://issues.apache.org/jira/browse/HADOOP-7363] 
> since testMkdirsWithUmask test will never work with RawLocalFileSystem.
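As a side note, a minimal sketch of the caching pattern described above, with made-up 
class and method names (not the actual RawLocalFileSystem code); it only shows why a 
umask changed after construction is never observed:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;

public final class UmaskCachingSketch {
  private final Configuration conf;
  // Reading the umask once at construction time is what makes a later
  // FsPermission.setUMask(conf, ...) call from a test invisible.
  private final FsPermission cachedUmask;

  UmaskCachingSketch(Configuration conf) {
    this.conf = conf;
    this.cachedUmask = FsPermission.getUMask(conf);
  }

  FsPermission umaskUsedByMkdirs() {
    return cachedUmask;                 // does not react to umask changes
  }

  FsPermission umaskReadPerOperation() {
    return FsPermission.getUMask(conf); // this variant would see the change
  }
}
{code}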



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13082) RawLocalFileSystem does not fail when moving file to a non-existing directory

2016-05-03 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269014#comment-15269014
 ] 

Andras Bokor commented on HADOOP-13082:
---

[~ste...@apache.org] [~mattf] [~arpitagarwal],
Kindly advise.

> RawLocalFileSystem does not fail when moving file to a non-existing directory
> -
>
> Key: HADOOP-13082
> URL: https://issues.apache.org/jira/browse/HADOOP-13082
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory: creates 
> a file, then moves it to a non-existing directory. It should fail, but it will 
> not (with RawLocalFileSystem) because the RawLocalFileSystem#rename(Path, 
> Path) method has a fallback behavior that accomplishes the rename by a full 
> copy. The full copy will create the new directory and copy the file there.
> I see two possible solutions here:
> # Remove the fallback full copy behavior
> # Before the full copy we should check whether the parent directory exists or 
> not. If not, return false and do not do the full copy.
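For illustration only, a minimal sketch of option 2 from the list above, with assumed 
helper names; this is not the actual RawLocalFileSystem code or the eventual patch:
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public final class RenameFallbackSketch {
  /**
   * Option 2 from the description: only fall back to a copy-based rename
   * when the destination's parent directory already exists, so a rename into
   * a missing directory fails the way a plain rename would.
   */
  static boolean renameViaCopy(FileSystem fs, Path src, Path dst) throws IOException {
    Path dstParent = dst.getParent();
    if (dstParent == null || !fs.exists(dstParent)) {
      return false;                     // do not create the directory implicitly
    }
    // Existing fallback idea: copy the file, deleting the source on success.
    Configuration conf = fs.getConf();
    return FileUtil.copy(fs, src, fs, dst, true /* deleteSource */, conf);
  }
}
{code}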



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13082) RawLocalFileSystem does not fail when moving file to a non-existing directory

2016-05-03 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13082:
--
Issue Type: Bug  (was: Test)

> RawLocalFileSystem does not fail when moving file to a non-existing directory
> -
>
> Key: HADOOP-13082
> URL: https://issues.apache.org/jira/browse/HADOOP-13082
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory: creates 
> a file, then moves it to a non-existing directory. It should fail, but it will 
> not (with RawLocalFileSystem) because the RawLocalFileSystem#rename(Path, 
> Path) method has a fallback behavior that accomplishes the rename by a full 
> copy. The full copy will create the new directory and copy the file there.
> I see two possible solutions here:
> # Remove the fallback full copy behavior
> # Before the full copy we should check whether the parent directory exists or 
> not. If not, return false and do not do the full copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13082) RawLocalFileSystem does not fail when moving file to a non-existing directory

2016-05-03 Thread Yahoo! No Reply (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269003#comment-15269003
 ] 

Yahoo! No Reply commented on HADOOP-13082:
--


This is an automatically generated message.

ran...@yahoo-inc.com is no longer with Yahoo! Inc.

Your message will not be forwarded.

If you have a sales inquiry, please email yahoosa...@yahoo-inc.com and someone 
will follow up with you shortly.

If you require assistance with a legal matter, please send a message to 
legal-noti...@yahoo-inc.com

Thank you!


> RawLocalFileSystem does not fail when moving file to a non-existing directory
> -
>
> Key: HADOOP-13082
> URL: https://issues.apache.org/jira/browse/HADOOP-13082
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory: creates 
> a file, then moves it to a non-existing directory. It should fail, but it will 
> not (with RawLocalFileSystem) because the RawLocalFileSystem#rename(Path, 
> Path) method has a fallback behavior that accomplishes the rename by a full 
> copy. The full copy will create the new directory and copy the file there.
> I see two possible solutions here:
> # Remove the fallback full copy behavior
> # Before the full copy we should check whether the parent directory exists or 
> not. If not, return false and do not do the full copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13082) RawLocalFileSystem does not fail when moving file to a non-existing directory

2016-05-03 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13082:
--
Description: 
FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory: creates a 
file then move it to a non-existing directory. It should fail but it will not 
(with RawLocalFileSystem) because in RawLocalFileSystem#rename(Path, Path) 
method we have a fallback behavior that accomplishes the rename by a full copy. 
The full copy will create the new directory and copy the file there.

I see two possible solutions here:
# Remove the fallback full copy behavior
# Before full cp we should check whether the parent directory exists or not. If 
not return false an do not do the full copy.

  was:
FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory: creates a 
file then move it to a non-existing directory. It should fail but it will not 
(with RawLocalFileSystem) because in RawLocalFileSystem#rename(Path, Path) 
method we have a fallback behavior that accomplishes the rename by a full copy. 
The full copy will create the new directory and copy the file there.

I see two possible solutions here:
#Remove the fallback full copy behavior
#Before full cp we should check whether the parent directory exists or not. If 
not return false an do not do the full copy.


> RawLocalFileSystem does not fail when moving file to a non-existing directory
> -
>
> Key: HADOOP-13082
> URL: https://issues.apache.org/jira/browse/HADOOP-13082
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory: creates 
> a file, then moves it to a non-existing directory. It should fail, but it will 
> not (with RawLocalFileSystem) because the RawLocalFileSystem#rename(Path, 
> Path) method has a fallback behavior that accomplishes the rename by a full 
> copy. The full copy will create the new directory and copy the file there.
> I see two possible solutions here:
> # Remove the fallback full copy behavior
> # Before the full copy we should check whether the parent directory exists or 
> not. If not, return false and do not do the full copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13082) RawLocalFileSystem does not fail when moving file to a non-existing directory

2016-05-03 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13082:
--
Description: 
FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory: creates a 
file then move it to a non-existing directory. It should fail but it will not 
(with RawLocalFileSystem) because in RawLocalFileSystem#rename(Path, Path) 
method we have a fallback behavior that accomplishes the rename by a full copy. 
The full copy will create the new directory and copy the file there.

I see two possible solutions here:
#Remove the fallback full copy behavior
#Before full cp we should check whether the parent directory exists or not. If 
not return false an do not do the full copy.

  was:
FileSystemContractBaseTest#testMkdirsWithUmask is changing umask under the 
filesystem. RawLocalFileSystem reads the config on startup so it will not react 
if we change the umask.
It blocks [HADOOP-7363|https://issues.apache.org/jira/browse/HADOOP-7363] since 
testMkdirsWithUmask test will never work with RawLocalFileSystem.


> RawLocalFileSystem does not fail when moving file to a non-existing directory
> -
>
> Key: HADOOP-13082
> URL: https://issues.apache.org/jira/browse/HADOOP-13082
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory: creates 
> a file, then moves it to a non-existing directory. It should fail, but it will 
> not (with RawLocalFileSystem) because the RawLocalFileSystem#rename(Path, 
> Path) method has a fallback behavior that accomplishes the rename by a full 
> copy. The full copy will create the new directory and copy the file there.
> I see two possible solutions here:
> #Remove the fallback full copy behavior
> #Before the full copy we should check whether the parent directory exists or 
> not. If not, return false and do not do the full copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13082) RawLocalFileSystem does not fail when moving file to a non-existing directory

2016-05-03 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-13082:
-

 Summary: RawLocalFileSystem does not fail when moving file to a 
non-existing directory
 Key: HADOOP-13082
 URL: https://issues.apache.org/jira/browse/HADOOP-13082
 Project: Hadoop Common
  Issue Type: Test
  Components: fs
Affects Versions: 0.23.0
Reporter: Andras Bokor
Assignee: Andras Bokor


FileSystemContractBaseTest#testMkdirsWithUmask is changing umask under the 
filesystem. RawLocalFileSystem reads the config on startup so it will not react 
if we change the umask.
It blocks [HADOOP-7363|https://issues.apache.org/jira/browse/HADOOP-7363] since 
testMkdirsWithUmask test will never work with RawLocalFileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268981#comment-15268981
 ] 

Colin Patrick McCabe commented on HADOOP-13079:
---

Thank you for the background information.  I wasn't aware that the default of 
suppressing non-printing characters was "optional" according to POSIX.

I think the important thing is that we've established that:
* Suppressing non-printing characters by default fixes several serious security 
vulnerabilities, including some that have CVEs,
* This suppression behavior is explicitly allowed by POSIX,
* The most popular UNIX system on Earth, Linux, implements this behavior, so 
nobody will be surprised by it.

bq. Essentially interactive sessions with stdin redirected \[falsely show up as 
non-interactive from Java\]

I guess my concern about adding a JNI dependency here is that it will make 
things too nondeterministic.  I've seen too many clusters where JNI was 
improperly configured.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.
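For reference, a small sketch of the detection problem described above, using only the 
JDK heuristic; the comment notes where a JNI {{isatty()}} call would differ. Nothing 
here is an existing Hadoop API:
{code:java}
public final class TtyDetectionSketch {
  // Pure-JDK heuristic: System.console() is non-null only when neither stdin
  // nor stdout has been redirected, so an interactive session whose stdin is
  // redirected is wrongly reported as non-interactive.
  static boolean stdoutLooksLikeTerminal() {
    return System.console() != null;
  }
  // A JNI call down to C's isatty(STDOUT_FILENO) would answer the question for
  // stdout alone, at the cost of building and loading a native library.
}
{code}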



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268961#comment-15268961
 ] 

Colin Patrick McCabe commented on HADOOP-13065:
---

Interesting post... I wasn't aware that AtomicLong etc. had performance issues.

However, I don't think we need an API for updating metrics.  We only need an 
API for _reading_ metrics.  The current read API in this patch supports reading 
primitive longs, which should work well with {{AtomicLongFieldUpdater}}, or 
whatever else we want to use.

> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Attachments: HADOOP-13065-007.patch, HDFS-10175.000.patch, 
> HDFS-10175.001.patch, HDFS-10175.002.patch, HDFS-10175.003.patch, 
> HDFS-10175.004.patch, HDFS-10175.005.patch, HDFS-10175.006.patch, 
> TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> There is logic within DfsClient to map operations to these counters that can 
> be confusing, for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-03 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268955#comment-15268955
 ] 

Inigo Goiri commented on HADOOP-13080:
--

Thank you, [~chris.douglas]!

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: HADOOP-13080-v0.patch, HADOOP-13080-v1.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.
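A minimal illustration of that class of unit bug, with made-up field names (not the 
actual SysInfoWindows code):
{code:java}
import java.util.concurrent.TimeUnit;

public final class RefreshIntervalSketch {
  private static final long REFRESH_INTERVAL_MS = 1000L;
  private long lastRefreshMs = -1L;

  boolean needsRefreshBuggy() {
    // System.nanoTime() is in nanoseconds, so comparing it against a
    // millisecond interval makes the check effectively always true.
    return System.nanoTime() - lastRefreshMs > REFRESH_INTERVAL_MS;
  }

  boolean needsRefreshFixed() {
    long nowMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime());
    return nowMs - lastRefreshMs > REFRESH_INTERVAL_MS;
  }
}
{code}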



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-05-03 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-12101:

Attachment: HADOOP-12101.016.patch

- More shellcheck cleanup

> Add automatic search of default Configuration variables to 
> TestConfigurationFieldsBase
> --
>
> Key: HADOOP-12101
> URL: https://issues.apache.org/jira/browse/HADOOP-12101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Attachments: HADOOP-12101.001.patch, HADOOP-12101.002.patch, 
> HADOOP-12101.003.patch, HADOOP-12101.004.patch, HADOOP-12101.005.patch, 
> HADOOP-12101.006.patch, HADOOP-12101.007.patch, HADOOP-12101.008.patch, 
> HADOOP-12101.009.patch, HADOOP-12101.010.patch, HADOOP-12101.011.patch, 
> HADOOP-12101.012.patch, HADOOP-12101.013.patch, HADOOP-12101.014.patch, 
> HADOOP-12101.015.patch, HADOOP-12101.016.patch
>
>
> Add functionality given a Configuration variable FOO, to at least check the 
> xml file value against DEFAULT_FOO.
> Without waivers and a mapping for exceptions, this can probably never be a 
> test method that generates actual errors.
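Purely as an illustration of the kind of check described above (not the actual 
TestConfigurationFieldsBase logic), a sketch that assumes constants come in 
FOO / DEFAULT_FOO pairs on some keys class:
{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import org.apache.hadoop.conf.Configuration;

public final class DefaultValueCheckSketch {
  /**
   * For every public static DEFAULT_FOO constant in the given keys class,
   * compare it with the value declared in the XML resources for the FOO key.
   */
  static void compareXmlWithDefaults(Class<?> keysClass, Configuration conf)
      throws IllegalAccessException {
    for (Field defaultField : keysClass.getFields()) {
      if (!Modifier.isStatic(defaultField.getModifiers())
          || !defaultField.getName().startsWith("DEFAULT_")) {
        continue;
      }
      // Assumes a DEFAULT_FOO constant pairs with a FOO key constant; real
      // code needs waivers and an exception mapping, as the description says.
      String keyFieldName = defaultField.getName().substring("DEFAULT_".length());
      try {
        String key = String.valueOf(keysClass.getField(keyFieldName).get(null));
        String xmlValue = conf.get(key);
        String defaultValue = String.valueOf(defaultField.get(null));
        if (xmlValue != null && !xmlValue.equals(defaultValue)) {
          System.out.println("Mismatch for " + key + ": xml=" + xmlValue
              + ", default=" + defaultValue);
        }
      } catch (NoSuchFieldException ignored) {
        // No matching FOO constant; nothing to compare.
      }
    }
  }
}
{code}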



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10694) Remove synchronized input streams from Writable deserialization

2016-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268870#comment-15268870
 ] 

Hadoop QA commented on HADOOP-10694:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
55s {color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
12s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 2 unchanged - 3 fixed = 2 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 18s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_92. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 38s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 1s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_92 Failed junit tests | hadoop.net.TestDNS |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801942/HADOOP-10694.2.patch |
| JIRA Issue | HADOOP-10694 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstal

[jira] [Commented] (HADOOP-12868) hadoop-openstack's pom has missing and unused dependencies

2016-05-03 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268796#comment-15268796
 ] 

Masatake Iwasaki commented on HADOOP-12868:
---

Thanks for the comment, [~andrew.wang]. I think I had run {{mvn 
dependency:analyze -Ptests-on}} before I posted the patch, since tests are 
disabled by default in hadoop-openstack.

It is not an unused dependency even in today's trunk, because {{mvn test-compile 
-Ptests-on}} failed without the dependency on hadoop-common:test-jar.

> hadoop-openstack's pom has missing and unused dependencies
> --
>
> Key: HADOOP-12868
> URL: https://issues.apache.org/jira/browse/HADOOP-12868
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12868.001.patch
>
>
> Attempting to compile openstack on a fairly fresh maven repo fails due to 
> commons-httpclient not being a declared dependency.  After that is fixed, 
> doing a maven dependency:analyze shows other problems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268768#comment-15268768
 ] 

Allen Wittenauer commented on HADOOP-13079:
---

... which is just the long form of what I just said. :)

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-9819) FileSystem#rename is broken, deletes target when renaming link to itself

2016-05-03 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HADOOP-9819:


Assignee: Andras Bokor  (was: Ajith S)

> FileSystem#rename is broken, deletes target when renaming link to itself
> 
>
> Key: HADOOP-9819
> URL: https://issues.apache.org/jira/browse/HADOOP-9819
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
> Attachments: HADOOP-9819.01.patch
>
>
> Uncovered while fixing TestSymlinkLocalFsFileSystem on Windows.
> This block of code deletes the symlink, the correct behavior is to do nothing.
> {code:java}
> try {
>   dstStatus = getFileLinkStatus(dst);
> } catch (IOException e) {
>   dstStatus = null;
> }
> if (dstStatus != null) {
>   if (srcStatus.isDirectory() != dstStatus.isDirectory()) {
> throw new IOException("Source " + src + " Destination " + dst
> + " both should be either file or directory");
>   }
>   if (!overwrite) {
> throw new FileAlreadyExistsException("rename destination " + dst
> + " already exists.");
>   }
>   // Delete the destination that is a file or an empty directory
>   if (dstStatus.isDirectory()) {
> FileStatus[] list = listStatus(dst);
> if (list != null && list.length != 0) {
>   throw new IOException(
>   "rename cannot overwrite non empty destination directory " + 
> dst);
> }
>   }
>   delete(dst, false);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10694) Remove synchronized input streams from Writable deserialization

2016-05-03 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-10694:
--
Status: Patch Available  (was: Open)

> Remove synchronized input streams from Writable deserialization
> ---
>
> Key: HADOOP-10694
> URL: https://issues.apache.org/jira/browse/HADOOP-10694
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10694.1.patch, HADOOP-10694.2.patch, 
> writable-read-sync.png
>
>
> Writable deserialization is slowing down due to a synchronized block within 
> DataInputBuffer$Buffer.
> ByteArrayInputStream::read() is synchronized and this shows up as a slow 
> uncontested lock.
> Hive ships with its own faster thread-unsafe version with 
> hive.common.io.NonSyncByteArrayInputStream.
> !writable-read-sync.png!
> The DataInputBuffer and Writable deserialization should not require a lock 
> per readInt()/read().
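To make the contrast concrete, here is a minimal non-synchronized analogue of the 
relevant read path, in the spirit of Hive's NonSyncByteArrayInputStream; this is an 
illustration of the idea, not code from either project:
{code:java}
import java.io.InputStream;

/**
 * Minimal single-threaded byte-array stream: same read() semantics as
 * ByteArrayInputStream for this subset, but without synchronized methods, so
 * Writable deserialization would not pay a lock per read()/readInt().
 */
public final class NonSyncByteArrayInput extends InputStream {
  private final byte[] buf;
  private final int count;
  private int pos;

  public NonSyncByteArrayInput(byte[] buf) {
    this.buf = buf;
    this.count = buf.length;
  }

  @Override
  public int read() {              // not synchronized, unlike ByteArrayInputStream
    return (pos < count) ? (buf[pos++] & 0xff) : -1;
  }

  @Override
  public int read(byte[] b, int off, int len) {
    if (pos >= count) {
      return -1;
    }
    int n = Math.min(len, count - pos);
    System.arraycopy(buf, pos, b, off, n);
    pos += n;
    return n;
  }
}
{code}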



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11786) Fix Javadoc typos in org.apache.hadoop.fs.FileSystem

2016-05-03 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268718#comment-15268718
 ] 

Andras Bokor commented on HADOOP-11786:
---

[~airbots] Could you please review my patch? Thanks in advance.

> Fix Javadoc typos in org.apache.hadoop.fs.FileSystem
> 
>
> Key: HADOOP-11786
> URL: https://issues.apache.org/jira/browse/HADOOP-11786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Chen He
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: newbie++
> Attachments: HADOOP-11786.patch
>
>
> /**
>  * Resets all statistics to 0.
>  *
>  * In order to reset, we add up all the thread-local statistics data, and
>  * set rootData to the negative of that.
>  *
>  * This may seem like a counterintuitive way to reset the statsitics.  Why
>  * can't we just zero out all the thread-local data?  Well, thread-local
>  * data can only be modified by the thread that owns it.  If we tried to
>  * modify the thread-local data from this thread, our modification might 
> get
>  * interleaved with a read-modify-write operation done by the thread that
>  * owns the data.  That would result in our update getting lost.
>  *
>  * The approach used here avoids this problem because it only ever reads
>  * (not writes) the thread-local data.  Both reads and writes to rootData
>  * are done under the lock, so we're free to modify rootData from any 
> thread
>  * that holds the lock.
>  */
> etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9819) FileSystem#rename is broken, deletes target when renaming link to itself

2016-05-03 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-9819:
-
Attachment: HADOOP-9819.01.patch

[~arpitagarwal] I attached [^HADOOP-9819.01.patch]. I used the same solution 
as {{AbstractFileSystem}}.
I think {{SymlinkBaseTest#testRenameSymlinkToItself}} is fine as it is, so we 
have nothing to do there.
Could you please review?

> FileSystem#rename is broken, deletes target when renaming link to itself
> 
>
> Key: HADOOP-9819
> URL: https://issues.apache.org/jira/browse/HADOOP-9819
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Ajith S
> Attachments: HADOOP-9819.01.patch
>
>
> Uncovered while fixing TestSymlinkLocalFsFileSystem on Windows.
> This block of code deletes the symlink, the correct behavior is to do nothing.
> {code:java}
> try {
>   dstStatus = getFileLinkStatus(dst);
> } catch (IOException e) {
>   dstStatus = null;
> }
> if (dstStatus != null) {
>   if (srcStatus.isDirectory() != dstStatus.isDirectory()) {
> throw new IOException("Source " + src + " Destination " + dst
> + " both should be either file or directory");
>   }
>   if (!overwrite) {
> throw new FileAlreadyExistsException("rename destination " + dst
> + " already exists.");
>   }
>   // Delete the destination that is a file or an empty directory
>   if (dstStatus.isDirectory()) {
> FileStatus[] list = listStatus(dst);
> if (list != null && list.length != 0) {
>   throw new IOException(
>   "rename cannot overwrite non empty destination directory " + 
> dst);
> }
>   }
>   delete(dst, false);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-03 Thread Esther Kundin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268702#comment-15268702
 ] 

Esther Kundin commented on HADOOP-12291:


OK, I see your point. I will make the suggested changes and upload a new patch.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13068) Clean up RunJar and related test class

2016-05-03 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268665#comment-15268665
 ] 

Andras Bokor commented on HADOOP-13068:
---

Thanks a lot, [~arpitagarwal]. You are right, the {code}{code} is missing from the xml.
Do you see anything else with my patch, or is it good to go?

> Clean up RunJar and related test class
> --
>
> Key: HADOOP-13068
> URL: https://issues.apache.org/jira/browse/HADOOP-13068
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.7.2
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-13068.01.patch, HADOOP-13068.02.patch, 
> HADOOP-13068.03.patch
>
>
> Clean up RunJar and related test class to remove IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10694) Remove synchronized input streams from Writable deserialization

2016-05-03 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-10694:
--
Attachment: (was: HADOOP-10694.2.patch)

> Remove synchronized input streams from Writable deserialization
> ---
>
> Key: HADOOP-10694
> URL: https://issues.apache.org/jira/browse/HADOOP-10694
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10694.1.patch, HADOOP-10694.2.patch, 
> writable-read-sync.png
>
>
> Writable deserialization is slowing down due to a synchronized block within 
> DataInputBuffer$Buffer.
> ByteArrayInputStream::read() is synchronized and this shows up as a slow 
> uncontested lock.
> Hive ships with its own faster thread-unsafe version with 
> hive.common.io.NonSyncByteArrayInputStream.
> !writable-read-sync.png!
> The DataInputBuffer and Writable deserialization should not require a lock 
> per readInt()/read().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10694) Remove synchronized input streams from Writable deserialization

2016-05-03 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-10694:
--
Attachment: HADOOP-10694.2.patch

> Remove synchronized input streams from Writable deserialization
> ---
>
> Key: HADOOP-10694
> URL: https://issues.apache.org/jira/browse/HADOOP-10694
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10694.1.patch, HADOOP-10694.2.patch, 
> writable-read-sync.png
>
>
> Writable deserialization is slowing down due to a synchronized block within 
> DataInputBuffer$Buffer.
> ByteArrayInputStream::read() is synchronized and this shows up as a slow 
> uncontested lock.
> Hive ships with its own faster thread-unsafe version with 
> hive.common.io.NonSyncByteArrayInputStream.
> !writable-read-sync.png!
> The DataInputBuffer and Writable deserialization should not require a lock 
> per readInt()/read().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10694) Remove synchronized input streams from Writable deserialization

2016-05-03 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-10694:
--
Attachment: HADOOP-10694.2.patch

> Remove synchronized input streams from Writable deserialization
> ---
>
> Key: HADOOP-10694
> URL: https://issues.apache.org/jira/browse/HADOOP-10694
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10694.1.patch, HADOOP-10694.2.patch, 
> writable-read-sync.png
>
>
> Writable deserialization is slowing down due to a synchronized block within 
> DataInputBuffer$Buffer.
> ByteArrayInputStream::read() is synchronized and this shows up as a slow 
> uncontested lock.
> Hive ships with its own faster thread-unsafe version with 
> hive.common.io.NonSyncByteArrayInputStream.
> !writable-read-sync.png!
> The DataInputBuffer and Writable deserialization should not require a lock 
> per readInt()/read().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-05-03 Thread Bolke de Bruin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268493#comment-15268493
 ] 

Bolke de Bruin commented on HADOOP-12751:
-

Hi [~steve_l], any update on this?

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available on the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local', 
> while other users will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly. 
> However, Hadoop assumes that user names containing '@' cannot be 
> correct. This code is in KerberosName.java and seems to be a validator of 
> whether the 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (by having system tools rewrite the name 
> to, for example, user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13035) Add states INITING and STARTING to YARN Service model to cover in-transition states.

2016-05-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268487#comment-15268487
 ] 

Steve Loughran commented on HADOOP-13035:
-

One thing to bear in mind about that state-change lock is that, because the 
{{serviceInit}} and {{serviceStart}} methods are called while holding the lock, 
code inside those methods can move the service to a new state while the call is 
still in progress. This is why the {{init()}} and {{start()}} methods check 
whether they are still in the required state on exit. This is important because 
it allows {{serviceInit}} and {{serviceStart}} to invoke {{stop()}} without 
blocking.

Anything playing with a new state-change field would have to handle similar 
cases, especially {{serviceInit}} calling {{start()}}. (I don't know of 
anything which does that, and I'm not sure we should allow it, but it's 
possible.) A stop can also happen, which really complicates things if something 
like a contained service ever directly or indirectly calls its parent's 
{{stop()}}. There's also some unwinding in {{CompositeService.serviceStop()}} 
which only calls {{stop()}} on children that are explicitly in the {{STARTED}} 
state; we'd have to make sure that covers the in-transition states (probably), 
or explicitly decide to ignore services that are still starting.

The more I think about this, an expanded state model, with the existing 
accessor/enum retained as is, is probably the way to manage it. Users of the 
old API would get the STARTED state while the service is in STARTING/STARTED; 
users of the new API would get the detailed value. This would allow the inner 
state model to be complete, covering the logic of the various state transitions 
without having to rely on transitional states. 


There's also some fun when we consider that today INIT and STARTED are 
idempotent:
{code}
if (enterState(STATE.INITED) != STATE.INITED) { ...}
{code}

That would actually get more complex with the extra states; perhaps 
{{enterState()}} would be changed to return a boolean indicating whether a 
state change has *really* occurred, with INITED/INITING and STARTED/STARTING 
treated as equivalent from the perspective of idempotent transitions. That is, 
STARTED.enterState(STARTING) => STARTED and STARTING.enterState(STARTED) => 
STARTED are needed, because that is how the start() operation would transit to 
its final state. Whatever patch gets in, making sure it is fully idempotent 
here will have to be looked at carefully
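To make the "expanded state model, existing enum retained" idea concrete, here is a 
toy sketch with made-up enum and method names; it is not the actual {{Service}} API, 
and, by analogy with STARTING/STARTED above, it shows INITING collapsing onto INITED:
{code:java}
public final class StateMappingSketch {
  /** Hypothetical detailed states, including the in-transition ones. */
  enum DetailedState { NOTINITED, INITING, INITED, STARTING, STARTED, STOPPED }

  /** The coarse states the existing accessor would keep returning. */
  enum LegacyState { NOTINITED, INITED, STARTED, STOPPED }

  /** Old-API view: each in-transition state collapses onto its target state. */
  static LegacyState legacyView(DetailedState s) {
    switch (s) {
      case INITING:
      case INITED:
        return LegacyState.INITED;
      case STARTING:
      case STARTED:
        return LegacyState.STARTED;
      case STOPPED:
        return LegacyState.STOPPED;
      default:
        return LegacyState.NOTINITED;
    }
  }
}
{code}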


> Add states INITING and STARTING to YARN Service model to cover in-transition 
> states.
> 
>
> Key: HADOOP-13035
> URL: https://issues.apache.org/jira/browse/HADOOP-13035
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
> Attachments: 0001-HADOOP-13035.patch, 0002-HADOOP-13035.patch, 
> 0003-HADOOP-13035.patch
>
>
> As per the discussion in YARN-3971, we should be setting the service state 
> to STARTED only after serviceStart(). 
> Currently {{AbstractService#start()}} is:
> {noformat} 
>  if (stateModel.enterState(STATE.STARTED) != STATE.STARTED) {
> try {
>   startTime = System.currentTimeMillis();
>   serviceStart();
> ..
>  }
> {noformat}
> enterState sets the service state to the proposed state, so 
> {{service.getServiceState}} called inside {{serviceStart()}} will return STARTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268386#comment-15268386
 ] 

Steve Loughran commented on HADOOP-13065:
-

Related to this, I've come across some details about how the reflection-based 
{{AtomicLongFieldUpdater}} is going to get a lot closer in cost to non-atomic 
{{volatile long++}} calls: http://shipilev.net/blog/2015/faster-atomic-fu/

This argues in favour of using that mechanism for updating metrics directly: 
the variables could just be simple {{volatile long}} fields, with a static 
{{AtomicLongFieldUpdater}} used to update them all:

{code}
import java.util.concurrent.atomic.AtomicLongFieldUpdater;

public final class ReadMetrics {

  // One static updater shared by all instances; the counter itself is a
  // plain volatile long field on this class.
  private static final AtomicLongFieldUpdater<ReadMetrics> BYTES_READ_UPDATER =
      AtomicLongFieldUpdater.newUpdater(ReadMetrics.class, "bytesRead");

  private volatile long bytesRead;

  public long incBytesRead(long count) {
    // addAndGet needs the instance whose volatile field is being updated.
    return BYTES_READ_UPDATER.addAndGet(this, count);
  }
}
{code}

HBase uses this in {{org.apache.hadoop.hbase.util.Counter}}, where it's 
described as "High scalable counter. Thread safe."


> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Attachments: HADOOP-13065-007.patch, HDFS-10175.000.patch, 
> HDFS-10175.001.patch, HDFS-10175.002.patch, HDFS-10175.003.patch, 
> HDFS-10175.004.patch, HDFS-10175.005.patch, HDFS-10175.006.patch, 
> TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> There is logic within DfsClient to map operations to these counters that can 
> be confusing, for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2016-05-03 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268366#comment-15268366
 ] 

John Zhuge commented on HADOOP-12990:
-

[~cavanaug], the lz4 command-line tool and hadoop-lz4 use the same lz4 codec 
library. The difference is only the framing; see my comment and hack on 4/3.

Questions for your use case:
* Do your JSON files contain a single JSON object or many JSON records?
* After ingesting into HDFS, how do you plan to use the data?
* Have you considered these splittable container file formats with compression: 
SequenceFile, RCFile, ORC, Avro, Parquet? In the container, they can choose any 
Hadoop codec, including LZ4.


> lz4 incompatibility between OS and Hadoop
> -
>
> Key: HADOOP-12990
> URL: https://issues.apache.org/jira/browse/HADOOP-12990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{hdfs dfs -text}} hits an exception when trying to view a compressed file 
> created by the Linux lz4 tool.
> The Hadoop version has HADOOP-11184 "update lz4 to r123", thus it is using 
> the LZ4 library from release r123.
> Linux lz4 version:
> {code}
> $ /tmp/lz4 -h 2>&1 | head -1
> *** LZ4 Compression CLI 64-bits r123, by Yann Collet (Apr  1 2016) ***
> {code}
> Test steps:
> {code}
> $ cat 10rows.txt
> 001|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 002|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 003|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 004|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 005|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 006|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 007|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 008|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 009|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 010|c1|c2|c3|c4|c5|c6|c7|c8|c9
> $ /tmp/lz4 10rows.txt 10rows.txt.r123.lz4
> Compressed 310 bytes into 105 bytes ==> 33.87%
> $ hdfs dfs -put 10rows.txt.r123.lz4 /tmp
> $ hdfs dfs -text /tmp/10rows.txt.r123.lz4
> 16/04/01 08:19:07 INFO compress.CodecPool: Got brand-new decompressor [.lz4]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:123)
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:106)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:101)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-05-03 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268319#comment-15268319
 ] 

Kai Zheng commented on HADOOP-12782:


I took a closer look at the code. It looks good overall. I'd suggest we 
refresh this to avoid the word *Fast*, for the reason I mentioned above, when 
convenient. Also some minor comments:
* We could remove the variable {{useFastLookup}} and instead use {{memberOfAttr != 
null}} directly to indicate the case.
* Regarding the following block, could we embed the logic in {{fastLookup}} 
directly to simplify it a bit?
{code}
if (useFastLookup) {
  try {
return fastLookup(result);
  } catch (NamingException e) {
// If fast lookup failed, fall back to the typical scenario.
LOG.debug("Failed in fast lookup. Initiating the second LDAP query " +
"using the user's DN.", e);
  }
}
{code}
* Please add a line break between functions;
* Please also avoid star imports;
* Could we remove the word *experimental* from the user doc?

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12782.001.patch, HADOOP-12782.002.patch, 
> HADOOP-12782.003.patch, HADOOP-12782.004.patch
>
>
> The typical LDAP group name resolution works well under typical scenarios. 
> However, we have seen cases where a user is mapped to many groups (in an 
> extreme case, a user is mapped to more than 100 groups). The way it is 
> implemented now makes resolving groups from ActiveDirectory very slow in 
> this case.
> The current LDAP group resolution implementation sends two queries to an 
> ActiveDirectory server. The first query returns a user object, which contains 
> the DN (distinguished name). The second query looks for groups where the user 
> DN is a member. If a user is mapped to many groups, the second query returns 
> all group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found that a user object 
> actually contains a "memberOf" field, which holds the DNs of all group objects 
> the user belongs to. Assuming that an organization has no recursive group 
> relations (that is, no case where user A is a member of group G1 and group G1 
> is a member of group G2), we can use this property to avoid the second query, 
> which can potentially run very slowly.
> I propose that we add a configuration to enable this feature only for users 
> who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.
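For illustration, a minimal JNDI sketch of the single-query approach described above; 
the filter and attribute names ({{sAMAccountName}}, {{memberOf}}) are assumptions for 
the sketch, not the patch's actual configuration:
{code:java}
import java.util.ArrayList;
import java.util.List;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public final class MemberOfLookupSketch {
  /**
   * One query: fetch the user entry and read its "memberOf" attribute, which
   * lists the DNs of the groups the user belongs to (no second group search).
   */
  static List<String> groupDnsFor(DirContext ctx, String baseDn, String user)
      throws NamingException {
    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    controls.setReturningAttributes(new String[] {"memberOf"});

    NamingEnumeration<SearchResult> results = ctx.search(
        baseDn, "(sAMAccountName={0})", new Object[] {user}, controls);

    List<String> groupDns = new ArrayList<>();
    if (results.hasMore()) {
      Attribute memberOf = results.next().getAttributes().get("memberOf");
      if (memberOf != null) {
        for (int i = 0; i < memberOf.size(); i++) {
          groupDns.add(String.valueOf(memberOf.get(i)));
        }
      }
    }
    return groupDns;
  }
}
{code}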



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org