[jira] [Commented] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268158#comment-15268158
 ] 

Hudson commented on HADOOP-13080:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9705 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9705/])
HADOOP-13080. Refresh time in SysInfoWindows is in nanoseconds. (cdouglas: rev 
c1cc6ac667e9e1b2ed58f16cb9fa1584ea54f0ac)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SysInfoWindows.java


> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: HADOOP-13080-v0.patch, HADOOP-13080-v1.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.
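
For illustration only (this is not the SysInfoWindows code; the class name, fields,
and interval below are invented), the unit mismatch described above amounts to
comparing a nanosecond clock reading against a millisecond interval instead of
converting the units before the comparison:

{code}
import java.util.concurrent.TimeUnit;

// Illustrative sketch only; not the SysInfoWindows implementation.
public class RefreshIntervalSketch {
  // Hypothetical refresh interval, expressed in milliseconds.
  private static final long REFRESH_INTERVAL_MS = 1000;

  private long lastRefreshMs = 0;

  // Buggy pattern: a nanosecond reading compared against a millisecond baseline,
  // so the computed "elapsed" value is roughly a million times too large.
  boolean needsRefreshBuggy() {
    long now = System.nanoTime();                      // nanoseconds
    return now - lastRefreshMs > REFRESH_INTERVAL_MS;  // compared as if milliseconds
  }

  // Consistent pattern: convert to milliseconds before comparing.
  boolean needsRefreshFixed() {
    long nowMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime());
    if (nowMs - lastRefreshMs > REFRESH_INTERVAL_MS) {
      lastRefreshMs = nowMs;
      return true;
    }
    return false;
  }
}
{code}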






[jira] [Updated] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13080:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I committed this. Thanks, Inigo.

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: HADOOP-13080-v0.patch, HADOOP-13080-v1.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.






[jira] [Commented] (HADOOP-12504) Remove metrics v1

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268079#comment-15268079
 ] 

Hadoop QA commented on HADOOP-12504:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 6s 
{color} | {color:green} root-jdk1.8.0_91 with JDK v1.8.0_91 generated 0 new + 
668 unchanged - 55 fixed = 668 total (was 723) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 58s 
{color} | {color:green} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 0 new + 
678 unchanged - 41 fixed = 678 total (was 719) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 60 unchanged - 433 fixed = 60 total (was 493) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 24s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 30s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
34s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801873/HADOOP-12504.02.patch 
|
| JIRA Issue | HADOOP-12504 |
| Optional Tests |  asflicense  mvnsite  unit  compile  javac  javadoc  
mvninstall  findbugs  checkstyle  |
| uname | Linux 7e592ea82234 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 

[jira] [Updated] (HADOOP-13030) Handle special characters in passwords in KMS startup script

2016-05-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13030:
---
Labels: supportability  (was: )

> Handle special characters in passwords in KMS startup script
> 
>
> Key: HADOOP-13030
> URL: https://issues.apache.org/jira/browse/HADOOP-13030
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-13030-repro.tar.gz, HADOOP-13030.01.patch, 
> HADOOP-13030.02.patch, HADOOP-13030.03.patch, HADOOP-13030.b28.patch
>
>
> {{kms.sh}} currently cannot handle special characters.
> {code}
>  sed -e 's/_kms_ssl_keystore_pass_/'${KMS_SSL_KEYSTORE_PASS}'/g' \
> -e 's/_kms_ssl_truststore_pass_/'${KMS_SSL_TRUSTSTORE_PASS}'/g' \
> "${HADOOP_CATALINA_HOME}/conf/ssl-server.xml.conf" \
> > "${HADOOP_CATALINA_HOME}/conf/ssl-server.xml"
> {code}






[jira] [Commented] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267979#comment-15267979
 ] 

Hadoop QA commented on HADOOP-13080:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 2s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 24s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 51s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801868/HADOOP-13080-v1.patch 
|
| JIRA Issue | HADOOP-13080 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dc8f1e715a05 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267949#comment-15267949
 ] 

Hadoop QA commented on HADOOP-12101:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 10s 
{color} | {color:red} The patch generated 12 new + 96 unchanged - 0 fixed = 108 
total (was 96) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 53s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 58s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801854/HADOOP-12101.015.patch
 |
| JIRA Issue | HADOOP-12101 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  findbugs  checkstyle  |
| uname | Linux 

[jira] [Updated] (HADOOP-12504) Remove metrics v1

2016-05-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12504:
---
Status: Patch Available  (was: Open)

> Remove metrics v1
> -
>
> Key: HADOOP-12504
> URL: https://issues.apache.org/jira/browse/HADOOP-12504
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Blocker
> Attachments: HADOOP-12054.00.patch, HADOOP-12054.01.patch, 
> HADOOP-12504.02.patch
>
>
> After HADOOP-7266, we should remove metrics v1 from trunk.






[jira] [Updated] (HADOOP-12504) Remove metrics v1

2016-05-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12504:
---
Attachment: HADOOP-12504.02.patch

v2 patch: Removed hadoop-metrics.properties

> Remove metrics v1
> -
>
> Key: HADOOP-12504
> URL: https://issues.apache.org/jira/browse/HADOOP-12504
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Blocker
> Attachments: HADOOP-12054.00.patch, HADOOP-12054.01.patch, 
> HADOOP-12504.02.patch
>
>
> After HADOOP-7266, we should remove metrics v1 from trunk.






[jira] [Updated] (HADOOP-12504) Remove metrics v1

2016-05-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12504:
---
Attachment: HADOOP-12054.01.patch

Now that MAPREDUCE-6526 has been committed, we can simply remove metrics v1.

> Remove metrics v1
> -
>
> Key: HADOOP-12504
> URL: https://issues.apache.org/jira/browse/HADOOP-12504
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Blocker
> Attachments: HADOOP-12054.00.patch, HADOOP-12054.01.patch
>
>
> After HADOOP-7266, we should remove metrics v1 from trunk.






[jira] [Commented] (HADOOP-10895) HTTP KerberosAuthenticator fallback should have a flag to disable it

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267879#comment-15267879
 ] 

Hadoop QA commented on HADOOP-10895:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-10895 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12686796/HADOOP-10895.009.patch
 |
| JIRA Issue | HADOOP-10895 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9255/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> HTTP KerberosAuthenticator fallback should have a flag to disable it
> 
>
> Key: HADOOP-10895
> URL: https://issues.apache.org/jira/browse/HADOOP-10895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Yongjun Zhang
>Priority: Blocker
> Attachments: HADOOP-10895.001.patch, HADOOP-10895.002.patch, 
> HADOOP-10895.003.patch, HADOOP-10895.003v1.patch, HADOOP-10895.003v2.patch, 
> HADOOP-10895.003v2improved.patch, HADOOP-10895.004.patch, 
> HADOOP-10895.005.patch, HADOOP-10895.006.patch, HADOOP-10895.007.patch, 
> HADOOP-10895.008.patch, HADOOP-10895.009.patch
>
>
> Per review feedback in HADOOP-10771, {{KerberosAuthenticator}} and the 
> delegation token version coming in with HADOOP-10771 should have a flag to 
> disable fallback to pseudo, similarly to the one that was introduced in 
> Hadoop RPC client with HADOOP-9698.






[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-05-02 Thread Yahoo! No Reply (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267878#comment-15267878
 ] 

Yahoo! No Reply commented on HADOOP-12892:
--


This is an automatically generated message.

ran...@yahoo-inc.com is no longer with Yahoo! Inc.

Your message will not be forwarded.

If you have a sales inquiry, please email yahoosa...@yahoo-inc.com and someone 
will follow up with you shortly.

If you require assistance with a legal matter, please send a message to 
legal-noti...@yahoo-inc.com

Thank you!


> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.






[jira] [Reopened] (HADOOP-12892) fix/rewrite create-release

2016-05-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-12892:
--

I'm reopening this since we need this in branch-2.8 to handle the new 
releasedocs generation. I made a quick cherry-pick attempt and the ISA-L stuff 
at least needs to be removed.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.






[jira] [Updated] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13080:
-
Attachment: HADOOP-13080-v1.patch

Fixing checkstyle.

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13080-v0.patch, HADOOP-13080-v1.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.






[jira] [Commented] (HADOOP-11793) Update create-release for releasedocmaker.py

2016-05-02 Thread Yahoo! No Reply (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267842#comment-15267842
 ] 

Yahoo! No Reply commented on HADOOP-11793:
--


This is an automatically generated message.

ran...@yahoo-inc.com is no longer with Yahoo! Inc.

Your message will not be forwarded.

If you have a sales inquiry, please email yahoosa...@yahoo-inc.com and someone 
will follow up with you shortly.

If you require assistance with a legal matter, please send a message to 
legal-noti...@yahoo-inc.com

Thank you!


> Update create-release for releasedocmaker.py
> 
>
> Key: HADOOP-11793
> URL: https://issues.apache.org/jira/browse/HADOOP-11793
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Allen Wittenauer
>Assignee: ramtin
> Attachments: HADOOP-11793.001.patch
>
>
> With the commit of HADOOP-11731, the changelog and release note data is now 
> automated with the build.  The create-release script needs to do the correct 
> thing.






[jira] [Resolved] (HADOOP-11793) Update create-release for releasedocmaker.py

2016-05-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-11793.
--
Resolution: Duplicate

Duping this one, since it's handled by the create-release rewrite in 
HADOOP-12892. Thanks all!

> Update create-release for releasedocmaker.py
> 
>
> Key: HADOOP-11793
> URL: https://issues.apache.org/jira/browse/HADOOP-11793
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Allen Wittenauer
>Assignee: ramtin
> Attachments: HADOOP-11793.001.patch
>
>
> With the commit of HADOOP-11731, the changelog and release note data is now 
> automated with the build.  The create-release script needs to do the correct 
> thing.






[jira] [Commented] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267838#comment-15267838
 ] 

Hadoop QA commented on HADOOP-13080:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 25s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 1s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 17s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801795/HADOOP-13080-v0.patch 
|
| JIRA Issue | HADOOP-13080 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 32fc86f19786 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build 

[jira] [Commented] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267791#comment-15267791
 ] 

Hadoop QA commented on HADOOP-12101:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 0s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 10s 
{color} | {color:red} The patch generated 12 new + 96 unchanged - 0 fixed = 108 
total (was 96) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 24s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 15s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.ipc.TestIPC |
| JDK v1.8.0_91 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801793/HADOOP-12101.014.patch
 |
| 

[jira] [Comment Edited] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267764#comment-15267764
 ] 

Allen Wittenauer edited comment on HADOOP-13079 at 5/3/16 12:11 AM:


bq. It's not surprising, because it matches the traditional UNIX / Linux 
behavior. 

The defaulting of -q on is not traditional UNIX behavior.  It may be what GNU 
does ("Linux"), but it's not the expected, standard behavior according to the 
POSIX spec. (The POSIX spec does, however, say that individual implementations 
may turn it on.)  The fact that -q is a standard, single-letter option and the 
way to turn it off is not should have been a very big hint.


was (Author: aw):
bq. It's not surprising, because it matches the traditional UNIX / Linux 
behavior. 

The defaulting of -q on is not traditional UNIX behavior.  It may be what GNU 
does ("Linux"), but it's not the expected, standard behavior according to the 
POSIX spec. (The POSIX spec, does, however, say that individual implementations 
may turn it on.)  The fact that -q is an option and the way to turn it off is 
not should have been a very big hint.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.
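
A rough sketch of the masking described above (not the proposed patch; the class
and method names are invented, and it uses {{System.console()}} only as the
imperfect stand-in for {{isatty()}} that the description mentions):

{code}
// A minimal sketch, not the proposed patch; class and method names are invented.
public class QuotedLsSketch {

  // Roughly mirrors isprint(3) for ASCII: keep printable characters, mask the rest.
  static String maskNonPrintable(String name) {
    StringBuilder sb = new StringBuilder(name.length());
    for (int i = 0; i < name.length(); i++) {
      char c = name.charAt(i);
      sb.append(c >= 0x20 && c < 0x7f ? c : '?');
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // Approximation of isatty(STDOUT_FILENO); the description above notes this
    // check is unreliable in some cases, hence the proposal to call isatty() via JNI.
    boolean onTerminal = System.console() != null;
    String raw = "file\u0007name";  // a name containing a BEL control character
    System.out.println(onTerminal ? maskNonPrintable(raw) : raw);
  }
}
{code}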






[jira] [Updated] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13080:
-
Status: Patch Available  (was: In Progress)

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13080-v0.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.






[jira] [Updated] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13080:
-
Status: In Progress  (was: Patch Available)

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13080-v0.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.






[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267764#comment-15267764
 ] 

Allen Wittenauer commented on HADOOP-13079:
---

bq. It's not surprising, because it matches the traditional UNIX / Linux 
behavior. 

The defaulting of -q on is not traditional UNIX behavior.  It may be what GNU 
does ("Linux"), but it's not the expected, standard behavior according to the 
POSIX spec. (The POSIX spec, does, however, say that individual implementations 
may turn it on.)  The fact that -q is an option and the way to turn it off is 
not should have been a very big hint.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.






[jira] [Commented] (HADOOP-10895) HTTP KerberosAuthenticator fallback should have a flag to disable it

2016-05-02 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267737#comment-15267737
 ] 

Yongjun Zhang commented on HADOOP-10895:


Hi [~andrew.wang],

Thanks for the ping. Since this fix is incompatible, 3.0 would be an 
opportunity to get it in. However, there are some related changes to be done in 
other components, per the earlier discussion in this JIRA. We'd need consensus 
before we can spend time moving this forward (I may not have the bandwidth at 
the moment). I wonder what other people think. Any comments, [~tucu00], 
[~rkanter], [~atm], [~hitliuyi]? Thanks.





> HTTP KerberosAuthenticator fallback should have a flag to disable it
> 
>
> Key: HADOOP-10895
> URL: https://issues.apache.org/jira/browse/HADOOP-10895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Yongjun Zhang
>Priority: Blocker
> Attachments: HADOOP-10895.001.patch, HADOOP-10895.002.patch, 
> HADOOP-10895.003.patch, HADOOP-10895.003v1.patch, HADOOP-10895.003v2.patch, 
> HADOOP-10895.003v2improved.patch, HADOOP-10895.004.patch, 
> HADOOP-10895.005.patch, HADOOP-10895.006.patch, HADOOP-10895.007.patch, 
> HADOOP-10895.008.patch, HADOOP-10895.009.patch
>
>
> Per review feedback in HADOOP-10771, {{KerberosAuthenticator}} and the 
> delegation token version coming in with HADOOP-10771 should have a flag to 
> disable fallback to pseudo, similarly to the one that was introduced in 
> Hadoop RPC client with HADOOP-9698.






[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-02 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267729#comment-15267729
 ] 

Chris Nauroth commented on HADOOP-13028:


[~ste...@apache.org], I've spent more time reading the seek code changes, and 
I'm pretty confident that they're correct overall, but I have a few more 
comments.

# {{S3AInputStream#closeStream}} has the following log message.  The text of 
the message indicates that it's logging {{contentLength}}, but really it's 
logging {{length}}.  I imagine {{length}} is really the more interesting thing 
here, and the message text should be changed?
{code}
  LOG.debug("Stream {} {}: {}; streamPos={}, nextReadPos={}," +
  " contentLength={}",
  uri, (shouldAbort ? "aborted":"closed"), reason, pos, nextReadPos,
  length);
{code}
# Actually, that makes me realize I am unclear about a change made in 
HADOOP-12444.  {{S3AInputStream#reopen}} has a stream length calculation that 
gets passed into the range request.
{code}
requestedStreamLen = (length < 0) ? this.contentLength :
Math.max(this.contentLength, (CLOSE_THRESHOLD + (targetPos + length)));
...
GetObjectRequest request = new GetObjectRequest(bucket, key)
.withRange(targetPos, requestedStreamLen);
{code}
Please tell me if I'm misunderstanding something, but I believe this 
calculation always results in an upper bound on the range that effectively 
means "get the whole thing": that {{Math.max}} call guarantees that the value 
is always at least {{contentLength}}, which is the whole file length (see the 
sketch after this list). Is this a bug in the HADOOP-12444 patch?
# {{InputStreamStatistics#seekBackwards}} accepts {{offset}} as an argument but 
doesn't use it.  Is there supposed to be another counter for back-skipped 
bytes?  At the call site within {{S3AInputStream#seekInStream}}, the value it 
passes would be negative, so we'd need to be careful of that.
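
To make the concern in point 2 concrete, here is a small standalone sketch (not
the S3AInputStream code; the {{CLOSE_THRESHOLD}} value and the numbers are
arbitrary) showing that the {{Math.max}} form never produces a range end smaller
than {{contentLength}}, even for a short read near the start of the object:

{code}
// Standalone illustration of the Math.max concern; not the S3AInputStream code.
public class RangeBoundSketch {
  static final long CLOSE_THRESHOLD = 4096;  // arbitrary value for illustration

  static long requestedStreamLen(long contentLength, long targetPos, long length) {
    return (length < 0) ? contentLength
        : Math.max(contentLength, CLOSE_THRESHOLD + (targetPos + length));
  }

  public static void main(String[] args) {
    long contentLength = 1_000_000L;  // a 1 MB object
    // A 1 KB read starting near the beginning of the file.
    long rangeEnd = requestedStreamLen(contentLength, 100, 1024);
    // Prints 1000000: the range end equals the full object length,
    // i.e. "get the whole thing" rather than a short range.
    System.out.println(rangeEnd);
  }
}
{code}

If the intent was to bound the extra read-ahead, something like {{Math.min}} or
an explicit clamp to {{contentLength}} as the upper limit would seem more
natural, which appears to be the question being raised here.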


> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, 
> HADOOP-13028-branch-2-008.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive (a sign of a regression). 
> The S3A FS and individual input streams should have counters for the number of 
> open/close/failure+reconnect operations, and timers of how long things take. 
> These can be used downstream to measure the efficiency of the code (how often 
> connections are being made), connection reliability, etc.






[jira] [Updated] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-05-02 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-12101:

Attachment: HADOOP-12101.015.patch

- Fix variable reference

> Add automatic search of default Configuration variables to 
> TestConfigurationFieldsBase
> --
>
> Key: HADOOP-12101
> URL: https://issues.apache.org/jira/browse/HADOOP-12101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Attachments: HADOOP-12101.001.patch, HADOOP-12101.002.patch, 
> HADOOP-12101.003.patch, HADOOP-12101.004.patch, HADOOP-12101.005.patch, 
> HADOOP-12101.006.patch, HADOOP-12101.007.patch, HADOOP-12101.008.patch, 
> HADOOP-12101.009.patch, HADOOP-12101.010.patch, HADOOP-12101.011.patch, 
> HADOOP-12101.012.patch, HADOOP-12101.013.patch, HADOOP-12101.014.patch, 
> HADOOP-12101.015.patch
>
>
> Add functionality so that, given a Configuration variable FOO, we at least 
> check the XML file value against DEFAULT_FOO.
> Without waivers and a mapping for exceptions, this can probably never be a 
> test method that generates actual errors.
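
A rough sketch of that idea follows (this is not the TestConfigurationFieldsBase
code; the FOO_KEY/FOO_DEFAULT naming convention and the Map standing in for the
parsed XML file are assumptions made for illustration):

{code}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; not the TestConfigurationFieldsBase implementation.
public class DefaultValueCheckSketch {

  // For every static String constant named *_KEY on the given keys class, look up
  // the matching *_DEFAULT constant and compare it with the value from the XML file
  // (modelled here as a plain Map of property name to value).
  static Map<String, String> findMismatches(Class<?> keysClass,
      Map<String, String> xmlValues) throws ReflectiveOperationException {
    Map<String, String> mismatches = new HashMap<>();
    for (Field keyField : keysClass.getDeclaredFields()) {
      if (!Modifier.isStatic(keyField.getModifiers())
          || keyField.getType() != String.class
          || !keyField.getName().endsWith("_KEY")) {
        continue;
      }
      String propertyName = (String) keyField.get(null);
      String defaultFieldName = keyField.getName().replaceAll("_KEY$", "_DEFAULT");
      Field defaultField;
      try {
        defaultField = keysClass.getDeclaredField(defaultFieldName);
      } catch (NoSuchFieldException e) {
        continue;  // no declared default for this key
      }
      String declaredDefault = String.valueOf(defaultField.get(null));
      String xmlDefault = xmlValues.get(propertyName);
      if (xmlDefault != null && !xmlDefault.equals(declaredDefault)) {
        mismatches.put(propertyName, declaredDefault + " != " + xmlDefault);
      }
    }
    return mismatches;
  }
}
{code}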






[jira] [Commented] (HADOOP-10895) HTTP KerberosAuthenticator fallback should have a flag to disable it

2016-05-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267716#comment-15267716
 ] 

Andrew Wang commented on HADOOP-10895:
--

[~yzhangal] are you still interested in pursuing this change for 3.0? Was 
wondering if this is really a blocker.

> HTTP KerberosAuthenticator fallback should have a flag to disable it
> 
>
> Key: HADOOP-10895
> URL: https://issues.apache.org/jira/browse/HADOOP-10895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Yongjun Zhang
>Priority: Blocker
> Attachments: HADOOP-10895.001.patch, HADOOP-10895.002.patch, 
> HADOOP-10895.003.patch, HADOOP-10895.003v1.patch, HADOOP-10895.003v2.patch, 
> HADOOP-10895.003v2improved.patch, HADOOP-10895.004.patch, 
> HADOOP-10895.005.patch, HADOOP-10895.006.patch, HADOOP-10895.007.patch, 
> HADOOP-10895.008.patch, HADOOP-10895.009.patch
>
>
> Per review feedback in HADOOP-10771, {{KerberosAuthenticator}} and the 
> delegation token version coming in with HADOOP-10771 should have a flag to 
> disable fallback to pseudo, similarly to the one that was introduced in 
> Hadoop RPC client with HADOOP-9698.






[jira] [Commented] (HADOOP-12868) hadoop-openstack's pom has missing and unused dependencies

2016-05-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267696#comment-15267696
 ] 

Andrew Wang commented on HADOOP-12868:
--

LGTM, though I'll note that my {{mvn dependency:analyze}} run also shows an 
unused hadoop-common:test-jar dependency:

{noformat}
[WARNING] Used undeclared dependencies found:
[WARNING]    org.apache.hadoop:hadoop-annotations:jar:2.9.0-SNAPSHOT:compile
[WARNING]    commons-logging:commons-logging:jar:1.1.3:compile
[WARNING] Unused declared dependencies found:
[WARNING]    org.apache.hadoop:hadoop-common:test-jar:tests:2.9.0-SNAPSHOT:compile
[WARNING]    commons-io:commons-io:jar:2.4:compile
[WARNING]    org.mockito:mockito-all:jar:1.8.5:provided
[WARNING]    com.google.guava:guava:jar:11.0.2:test
{noformat}

Is this my env? I don't see any commits to hadoop-openstack between your 
comment and my run, so not sure why this would have changed.

FWIW I removed this dep and was still able to build successfully in the 
hadoop-openstack dir.

> hadoop-openstack's pom has missing and unused dependencies
> --
>
> Key: HADOOP-12868
> URL: https://issues.apache.org/jira/browse/HADOOP-12868
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12868.001.patch
>
>
> Attempting to compile openstack on a fairly fresh maven repo fails due to 
> commons-httpclient not being a declared dependency.  After that is fixed, 
> doing a maven dependency:analyze shows other problems.






[jira] [Assigned] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-05-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HADOOP-12893:
--

Assignee: Xiao Chen

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-05-02 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267624#comment-15267624
 ] 

Kai Zheng commented on HADOOP-12756:


When providing the new revision, please also stick to the coding style, use the 
standard patch-name pattern, and submit it (to trigger the Jenkins build). Thanks.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: 0001-OSS-filesystem-integration-with-Hadoop.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, 
> similar to what has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-02 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267604#comment-15267604
 ] 

Anu Engineer commented on HADOOP-12291:
---

bq. The thought behind leaving the option of using -1 was that some companies 
may have a deeply nested structure and do not mind the cost of the lookups.

I do see the use case, but I am more worried that someone will have a slow 
LDAP/AD server and will cause a general slowdown of Namenode.

Another issue I see is that with infinite recursion we really have no control 
over the timeout: based on this patch, the timeout is per query, so in the 
infinite recursion scheme the total time is the number of times you recur 
multiplied by the timeout. At that point {{timeOut}} really has no meaning. As 
you pointed out, in the current scheme it is {{2 * timeOut}}. In your new scheme 
it will be {{max(Recur Depth, Configured Value) * timeOut}}. But in the infinite 
scheme, it is N * timeout where N is dependent on some values in AD.

I am worried that the support cost for such a feature would be too high. Also, 
if we really need it, we know that with your patch it is an easy change to make.

bq. The DIRECTORY_SEARCH_TIMEOUT is a timeout set for each LDAP query.
That works very well since we know the upper bound for the query, so the max 
time is maxDepth * timeout. Would you care to document that with your 
settings?

bq. I do not think you can make less LDAP queries. 
Thank you, good to know.

I am looking forward to your next patch.
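
To make the bound concrete, here is a rough, standalone sketch of a depth-bounded nested-group walk with a per-query timeout. {{lookupParentGroups}} is a hypothetical stand-in for a single LDAP query; the actual LdapGroupsMapping code differs.

{code}
import java.util.HashSet;
import java.util.Set;

// Illustration only: depth-bounded nested-group resolution with a per-query timeout.
abstract class BoundedNestedGroupResolver {
  // Hypothetical helper: one LDAP query returning the direct parent groups of a member.
  abstract Set<String> lookupParentGroups(String member, long perQueryTimeoutMs);

  Set<String> resolve(String user, int maxDepth, long perQueryTimeoutMs) {
    Set<String> result = new HashSet<>(lookupParentGroups(user, perQueryTimeoutMs));
    Set<String> frontier = new HashSet<>(result);
    for (int depth = 1; depth < maxDepth && !frontier.isEmpty(); depth++) {
      Set<String> next = new HashSet<>();
      for (String group : frontier) {
        next.addAll(lookupParentGroups(group, perQueryTimeoutMs));
      }
      next.removeAll(result);   // skip groups we have already visited
      result.addAll(next);
      frontier = next;
    }
    // Worst-case wall-clock time is roughly (total queries) * perQueryTimeoutMs,
    // which is only bounded when maxDepth is finite.
    return result;
  }
}
{code}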


> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools) but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-05-02 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267602#comment-15267602
 ] 

Masatake Iwasaki commented on HADOOP-12101:
---

Thanks for the update.

{noformat}
108 export yarnOutputFile="$(find "{dir}" -name 
org.apache.hadoop.yarn.conf.TestYarnConfigurationFields-output.txt)"
{noformat}

{dir} should be ${dir}


> Add automatic search of default Configuration variables to 
> TestConfigurationFieldsBase
> --
>
> Key: HADOOP-12101
> URL: https://issues.apache.org/jira/browse/HADOOP-12101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Attachments: HADOOP-12101.001.patch, HADOOP-12101.002.patch, 
> HADOOP-12101.003.patch, HADOOP-12101.004.patch, HADOOP-12101.005.patch, 
> HADOOP-12101.006.patch, HADOOP-12101.007.patch, HADOOP-12101.008.patch, 
> HADOOP-12101.009.patch, HADOOP-12101.010.patch, HADOOP-12101.011.patch, 
> HADOOP-12101.012.patch, HADOOP-12101.013.patch, HADOOP-12101.014.patch
>
>
> Add functionality given a Configuration variable FOO, to at least check the 
> xml file value against DEFAULT_FOO.
> Without waivers and a mapping for exceptions, this can probably never be a 
> test method that generates actual errors.
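
As a rough illustration of the kind of check being discussed (a sketch under assumed naming conventions, not the actual TestConfigurationFieldsBase code):

{code}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashMap;
import java.util.Map;

// Sketch: collect static *_DEFAULT constants via reflection so each can later be
// compared against the value declared in the *-default.xml file. The naming
// convention (FOO_DEFAULT next to FOO_KEY) is an assumption for illustration.
public class DefaultFieldScanner {
  public static Map<String, Object> scanDefaults(Class<?> keysClass) throws Exception {
    Map<String, Object> defaults = new HashMap<>();
    for (Field f : keysClass.getDeclaredFields()) {
      if (Modifier.isStatic(f.getModifiers()) && f.getName().endsWith("_DEFAULT")) {
        f.setAccessible(true);
        defaults.put(f.getName(), f.get(null));   // static field, null receiver
      }
    }
    return defaults;
  }
}
{code}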



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-05-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12345:
-
Priority: Critical  (was: Blocker)

I'm downgrading this from a blocker since it's not a regression.

I also spent a little time trying to make a repro. I'm not that familiar with 
the NFS gateway, and my IDE didn't see a place where mCredentialsLength was 
being used. I think this requires an actual NFS gateway unit test, which 
presumably uses this length somewhere in the RPC code.

This one is probably best handled by one of the NFS gateway experts, e.g. 
[~brandonli].

> Credential length in CredentialsSys.java incorrect
> --
>
> Key: HADOOP-12345
> URL: https://issues.apache.org/jira/browse/HADOOP-12345
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Pradeep Nayak Udupi Kadbet
>Priority: Critical
>
> Hi -
> There is a bug in the way hadoop-nfs sets the credential length in 
> "Credentials" field of the NFS RPC packet when using AUTH_SYS
> In CredentialsSys.java, when we are writing the creds in to XDR object, we 
> set the length as follows:
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> 96 mCredentialsLength = 20 + mHostName.getBytes().length;
> (20 corresponds to 4 bytes for mStamp, 4 bytes for mUID, 4 bytes for mGID, 4 
> bytes for length field of hostname, 4 bytes for number of aux 4 gids) and 
> this is okay.
> However when we add the length of the hostname to this, we are not adding the 
> extra padded bytes for the hostname (If the length is not a multiple of 4) 
> and thus when the NFS server reads the packet, it returns GARBAGE_ARGS 
> because it doesn't read the uid field when it is expected to read. I can 
> reproduce this issue constantly on machines where the hostname length is not 
> a multiple of 4.
> A possible fix is to do something like this:
> int pad = mHostName.getBytes().length % 4;
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length + pad;
> I would be happy to submit the patch but I need some help to commit into 
> mainline. I haven't committed into Hadoop yet.
> Cheers!
> Pradeep
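
For reference, a minimal sketch of the length arithmetic described above, assuming the 20-byte fixed portion is exactly as the reporter lists it. Note that XDR pads variable-length data up to the next 4-byte boundary, so the pad is {{(4 - len % 4) % 4}}, which differs slightly from the {{len % 4}} in the proposed snippet.

{code}
// Sketch of the AUTH_SYS credential-length arithmetic; not the actual
// CredentialsSys.java code.
public final class CredLengthSketch {
  static int credentialsLength(String hostName) {
    int nameLen = hostName.getBytes().length;
    int pad = (4 - nameLen % 4) % 4;   // 0..3 bytes to reach a 4-byte boundary
    // 20 = mStamp(4) + hostname length field(4) + mUID(4) + mGID(4) + aux GID count(4)
    return 20 + nameLen + pad;
  }

  public static void main(String[] args) {
    System.out.println(credentialsLength("host1"));  // length 5 -> pad 3 -> 28
    System.out.println(credentialsLength("host"));   // length 4 -> pad 0 -> 24
  }
}
{code}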



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-05-02 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13081:
---
Comment: was deleted

(was: 
This is an automatically generated message.

ran...@yahoo-inc.com is no longer with Yahoo! Inc.

Your message will not be forwarded.

If you have a sales inquiry, please email yahoosa...@yahoo-inc.com and someone 
will follow up with you shortly.

If you require assistance with a legal matter, please send a message to 
legal-noti...@yahoo-inc.com

Thank you!
)

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-05-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267553#comment-15267553
 ] 

Sergey Shelukhin commented on HADOOP-13081:
---

[~cnauroth] [~sseth] fyi

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-05-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HADOOP-13081:
--
Description: 
We have a scenario where we log in with kerberos as a certain user for some 
tasks, but also want to add tokens to the resulting UGI that would be specific 
to each task. We don't want to authenticate with kerberos for every task.
I am not sure how this can be accomplished with the existing UGI interface. 
Perhaps some clone method would be helpful, similar to createProxyUser minus 
the proxy stuff; or it could just relogin anew from ticket cache. 
getUGIFromTicketCache seems like the best option in existing code, but there 
doesn't appear to be a consistent way of handling ticket cache location - the 
above method, that I only see called in test, is using a config setting that is 
not used anywhere else, and the env variable for the location that is used in 
the main ticket cache related methods is not set uniformly on all paths - 
therefore, trying to find the correct ticket cache and passing it via the 
config setting to getUGIFromTicketCache seems even hackier than doing the clone 
via reflection ;) Moreover, getUGIFromTicketCache ignores the user parameter on 
the main path - it logs a warning for multiple principals and then logs in with 
first available.

  was:
We have a scenario where we log in with kerberos as a certain user for some 
tasks, but also want to add tokens to the resulting UGI that would be specific 
to each task. We don't want to authenticate with kerberos for every task.
I am not sure how this can be accomplished with the existing UGI interface. 
Perhaps some clone method would be helpful, similar to createProxyUser minus 
the proxy stuff; or it could just relogin anew from ticket cache. 
getUGIFromTicketCache seems like the best option in existing code, but there 
doesn't appear to be a consistent way of handling ticket cache location - the 
above method, that I only see called in test, is using a config setting that is 
not used anywhere else, and the env variable for the location is not set on all 
paths - trying to find the correct ticket cache and setting it in the config 
for getUGIFromTicketCache seems even hackier than doing the clone via 
reflection ;)


> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-05-02 Thread Yahoo! No Reply (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267549#comment-15267549
 ] 

Yahoo! No Reply commented on HADOOP-13081:
--


This is an automatically generated message.

ran...@yahoo-inc.com is no longer with Yahoo! Inc.

Your message will not be forwarded.

If you have a sales inquiry, please email yahoosa...@yahoo-inc.com and someone 
will follow up with you shortly.

If you require assistance with a legal matter, please send a message to 
legal-noti...@yahoo-inc.com

Thank you!


> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location is not set 
> on all paths - trying to find the correct ticket cache and setting it in the 
> config for getUGIFromTicketCache seems even hackier than doing the clone via 
> reflection ;)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-05-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HADOOP-13081:
--
Description: 
We have a scenario where we log in with kerberos as a certain user for some 
tasks, but also want to add tokens to the resulting UGI that would be specific 
to each task. We don't want to authenticate with kerberos for every task.
I am not sure how this can be accomplished with the existing UGI interface. 
Perhaps some clone method would be helpful, similar to createProxyUser minus 
the proxy stuff; or it could just relogin anew from ticket cache. 
getUGIFromTicketCache seems like the best option in existing code, but there 
doesn't appear to be a consistent way of handling ticket cache location - the 
above method, that I only see called in test, is using a config setting that is 
not used anywhere else, and the env variable for the location is not set on all 
paths - trying to find the correct ticket cache and setting it in the config 
for getUGIFromTicketCache seems even hackier than doing the clone via 
reflection ;)

  was:
We have a scenario where we log in with kerberos as a certain user for some 
tasks, but also want to add tokens to the resulting UGI that would be specific 
to each task. 
I am not sure how this can be accomplished with the existing UGI interface. 
Perhaps some clone method would be helpful, similar to createProxyUser minus 
the proxy stuff; or it could just relogin anew from ticket cache. 
getUGIFromTicketCache seems like the best option in existing code, but there 
doesn't appear to be a consistent way of handling ticket cache location - the 
above method, that I only see called in test, is using a config setting that is 
not used anywhere else, and the env variable for the location is not set on all 
paths - trying to find the correct ticket cache and setting it in the config 
for getUGIFromTicketCache seems even hackier than doing the clone via 
reflection ;)


> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location is not set 
> on all paths - trying to find the correct ticket cache and setting it in the 
> config for getUGIFromTicketCache seems even hackier than doing the clone via 
> reflection ;)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-05-02 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HADOOP-13081:
-

 Summary: add the ability to create multiple UGIs/subjects from one 
kerberos login
 Key: HADOOP-13081
 URL: https://issues.apache.org/jira/browse/HADOOP-13081
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sergey Shelukhin


We have a scenario where we log in with kerberos as a certain user for some 
tasks, but also want to add tokens to the resulting UGI that would be specific 
to each task. 
I am not sure how this can be accomplished with the existing UGI interface. 
Perhaps some clone method would be helpful, similar to createProxyUser minus 
the proxy stuff; or it could just relogin anew from ticket cache. 
getUGIFromTicketCache seems like the best option in existing code, but there 
doesn't appear to be a consistent way of handling ticket cache location - the 
above method, that I only see called in test, is using a config setting that is 
not used anywhere else, and the env variable for the location is not set on all 
paths - trying to find the correct ticket cache and setting it in the config 
for getUGIFromTicketCache seems even hackier than doing the clone via 
reflection ;)
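
A sketch of the desired usage pattern only; {{cloneUGI}} below is hypothetical and is precisely the API this issue asks for, so it does not exist in UserGroupInformation today.

{code}
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

// Illustration of the requested behavior; cloneUGI() is a placeholder.
class PerTaskUgiSketch {
  static UserGroupInformation forTask(UserGroupInformation loginUgi,
      Token<? extends TokenIdentifier> taskToken) {
    UserGroupInformation taskUgi = cloneUGI(loginUgi);  // hypothetical clone
    taskUgi.addToken(taskToken);  // token should be visible only to this task's UGI
    return taskUgi;
  }

  private static UserGroupInformation cloneUGI(UserGroupInformation ugi) {
    throw new UnsupportedOperationException("placeholder for the proposed clone API");
  }
}
{code}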



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13080:
---
Comment: was deleted

(was: +1 lgtm. Agree this doesn't require a unit test; ran the TestSysInfo* 
tests, they passed on trunk/branch-2/branch-2.8.

Will commit when Jenkins comes back.)

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13080-v0.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267541#comment-15267541
 ] 

Chris Douglas commented on HADOOP-13080:


+1 lgtm. Agree this doesn't require a unit test; ran the TestSysInfo* tests, 
they passed on trunk/branch-2/branch-2.8.

Will commit when Jenkins comes back.

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13080-v0.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267540#comment-15267540
 ] 

Chris Douglas commented on HADOOP-13080:


+1 lgtm. Agree this doesn't require a unit test; ran the TestSysInfo* tests, 
they passed on trunk/branch-2/branch-2.8.

Will commit when Jenkins comes back.

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13080-v0.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-05-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267537#comment-15267537
 ] 

Xiao Chen commented on HADOOP-12893:


Update:
So far we've gotten a consolidated list of LICENSE and NOTICE. We will be working 
on finalizing it, and will also try to copy it into our JAR files.

Thanks [~andrew.wang] and [~ajisakaa] for working on this together!

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267526#comment-15267526
 ] 

Inigo Goiri commented on HADOOP-13080:
--

[~chris.douglas], [~kasha] as you guys were involved in HADOOP-12180, do you 
mind taking a look?
Not sure if there's a point on adding a unit test for this as it wouldn't run 
on Windows.

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13080-v0.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-05-02 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267501#comment-15267501
 ] 

Wei-Chiu Chuang commented on HADOOP-12782:
--

The test failure is unrelated.

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12782.001.patch, HADOOP-12782.002.patch, 
> HADOOP-12782.003.patch, HADOOP-12782.004.patch
>
>
> The typical LDAP group name resolution works well under typical scenarios. 
> However, we have seen cases where a user is mapped to many groups (in an 
> extreme case, a user is mapped to more than 100 groups). The way it's being 
> implemented now makes this case super slow when resolving groups from 
> ActiveDirectory.
> The current LDAP group resolution implementation sends two queries to a 
> ActiveDirectory server. The first query returns a user object, which contains 
> DN (distinguished name). The second query looks for groups where the user DN 
> is a member. If a user is mapped to many groups, the second query returns all 
> group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found a user object 
> actually contains a "memberOf" field, which is the DN of all group objects 
> where the user belongs to. Assuming that an organization has no recursive 
> group relation (that is, a user A is a member of group G1, and group G1 is a 
> member of group G2), we can use this property to avoid the second query, 
> which can potentially run very slowly.
> I propose that we add a configuration to only enable this feature for users 
> who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.
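
A rough JNDI sketch of the single-query idea; the filter and attribute names are assumptions for illustration and the actual patch may differ.

{code}
import java.util.ArrayList;
import java.util.List;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

// Sketch: read the user's "memberOf" attribute in the first (user) query instead
// of issuing a second group query.
class MemberOfLookupSketch {
  static List<String> groupDNsForUser(DirContext ctx, String baseDN, String user)
      throws Exception {
    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    controls.setReturningAttributes(new String[] {"memberOf"});
    NamingEnumeration<SearchResult> results =
        ctx.search(baseDN, "(sAMAccountName={0})", new Object[] {user}, controls);
    List<String> groups = new ArrayList<>();
    if (results.hasMore()) {
      Attribute memberOf = results.next().getAttributes().get("memberOf");
      if (memberOf != null) {
        for (int i = 0; i < memberOf.size(); i++) {
          groups.add(String.valueOf(memberOf.get(i)));  // each value is a group DN
        }
      }
    }
    return groups;
  }
}
{code}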



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13080:
-
Component/s: util

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13080-v0.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13035) Add states INITING and STARTING to YARN Service model to cover in-transition states.

2016-05-02 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267432#comment-15267432
 ] 

Wangda Tan commented on HADOOP-13035:
-

Hi [~ste...@apache.org],

Thanks for sharing background and thoughts about this issue.

Given that this looks like a fundamental change, and could possibly cause other 
issues, for example:
bq. the fact that calling start() on a service which is started or in the 
process of starting is required to be a no-op.

I think adding:
bq. a transitionInProgress variable and accessor
sounds like a good plan if we add it to AbstractService and set 
transitionInProgress in the following blocks:
{code}
synchronized (stateChangeLock) {
  if (enterState(STATE.INITED) != STATE.INITED) {
 // set in-progress-flag
 // other logics
 // unset in-progress-flag
  }
...
{code}

Since every state transition needs to acquire stateChangeLock, it seems safe to 
me even when stop() is called while start() is in progress, or start() is called 
while init() is in progress, etc.
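
A standalone sketch of that guarded-transition shape (illustration only, not the actual AbstractService code; names and states are simplified):

{code}
// Simplified illustration of guarding a transition with an in-progress flag.
class ServiceSketch {
  enum STATE { NOTINITED, INITED, STARTED, STOPPED }

  private final Object stateChangeLock = new Object();
  private volatile STATE state = STATE.NOTINITED;
  private volatile boolean transitionInProgress = false;

  public void start() {
    synchronized (stateChangeLock) {
      if (state == STATE.STARTED) {
        return;                        // start() on a started service is a no-op
      }
      transitionInProgress = true;     // set in-progress flag
      try {
        serviceStart();                // the actual transition work
        state = STATE.STARTED;         // only now report STARTED
      } finally {
        transitionInProgress = false;  // unset in-progress flag
      }
    }
  }

  public boolean isTransitionInProgress() {
    return transitionInProgress;
  }

  protected void serviceStart() { /* subclass hook */ }
}
{code}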

Adding an ExtendedService is a more comprehensive fix, and I agree that it is 
doable, but we have to fix all existing subclasses of AbstractService to use 
that.

I think a simpler but incompatible fix is to rename the existing STARTED/INITED to 
STARTING/INITING and add a new STARTED/INITED to Service. Probably we can 
do that on trunk before Hadoop 3 gets released. Could you please share your 
thoughts?

Thanks,

> Add states INITING and STARTING to YARN Service model to cover in-transition 
> states.
> 
>
> Key: HADOOP-13035
> URL: https://issues.apache.org/jira/browse/HADOOP-13035
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
> Attachments: 0001-HADOOP-13035.patch, 0002-HADOOP-13035.patch, 
> 0003-HADOOP-13035.patch
>
>
> As per the discussion in YARN-3971 we should be setting the service state 
> to STARTED only after serviceStart(). 
> Currently {{AbstractService#start()}} does:
> {noformat} 
>  if (stateModel.enterState(STATE.STARTED) != STATE.STARTED) {
> try {
>   startTime = System.currentTimeMillis();
>   serviceStart();
> ..
>  }
> {noformat}
> enterState sets the service state to the proposed state, so 
> {{service.getServiceState}} called within {{serviceStart()}} will return STARTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-02 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267400#comment-15267400
 ] 

Chris Nauroth commented on HADOOP-13028:


Hello [~ste...@apache.org].  I'm still digging into the changes in the seek 
code and the tests, but I'd like to share the feedback I have so far for patch 
v008.

# Let's put visibility annotations on {{MetricsRecordBuilder}}.  
Public/Evolving for agreement with the {{MetricsRecordBuilder}} base class?
# Should {{initMultipartUploads}} increment the ignored error counter?
# {{rename}} has several {{catch}} blocks that don't propagate an exception.  
Should these increment the ignored error counter?
# In the following exception message, I think we need an extra space before the 
"to".  There are 2 different call sites that produce this message, so 2 spots 
to fix.
{code}
  throw new InterruptedIOException("Interrupted copying " + src
  + "to "  + dst + ", cancelling");
{code}
# Can you please include {{src}} and {{dst}} in this log message from 
{{rename}}?
{code}
  LOG.debug("rename: src or dst are empty");
{code}
# Can you please include {{key}} in this log message from {{delete}}?
{code}
LOG.debug("Deleting fake empty directory");
{code}
# Should {{S3AFileSystem#toString}} also include {{maxKeys}}, {{cannedACL}} and 
{{readAhead}}?
# In {{S3AInputStream#reopen}}, is the following log message redundant, 
considering the call to {{closeStream}} will do its own logging?
{code}
  LOG.debug("Closing the previous stream");
  closeStream("reopen(" + reason + ")", requestedStreamLen);
{code}
# {{S3AInputStream#setReadahead}} doesn't exactly match the specification 
defined in {{CanSetReadahead}}.  The interface says that {{null}} means to use 
the default, but the implementation here rejects {{null}}.  This could be 
problematic for more complex use cases, such as someone wanting to 
programmatically control the amount of readahead.  If they called 
{{setReadahead}} with a custom value, then I think ideally we should allow them 
to call it with {{null}} later, and restore back to the default from 
configuration.  (I admit this is an edge case, but a {{DFSInputStream}} does 
allow this behavior.)
# {{S3AInstrumentation}} receives a {{Configuration}} in its constructor but 
doesn't use it.  Can it be removed?
# {{S3AInstrumentation#gauge}} appears to be unused.
# {{InputStreamStatistics#toString}} does not include {{readFullyOperations}}.

It looks like there are some CheckStyle and JavaDoc things to follow up on from 
that last pre-commit run.  The test failure is unrelated.
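
On the readahead point (item 9 above), a small sketch of the null-means-default contract; the field names are assumptions and the real S3AInputStream code is organized differently.

{code}
// Sketch of CanSetReadahead semantics where null restores the configured default.
class ReadaheadSketch {
  private final long defaultReadahead;   // value originally read from configuration
  private long readahead;

  ReadaheadSketch(long defaultReadahead) {
    this.defaultReadahead = defaultReadahead;
    this.readahead = defaultReadahead;
  }

  public synchronized void setReadahead(Long newReadahead) {
    if (newReadahead == null) {
      this.readahead = defaultReadahead;   // null => back to the configured default
    } else if (newReadahead >= 0) {
      this.readahead = newReadahead;
    } else {
      throw new IllegalArgumentException("Negative readahead: " + newReadahead);
    }
  }

  public synchronized long getReadahead() {
    return readahead;
  }
}
{code}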


> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, 
> HADOOP-13028-branch-2-008.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13068) Clean up RunJar and related test class

2016-05-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267385#comment-15267385
 ] 

Arpit Agarwal edited comment on HADOOP-13068 at 5/2/16 8:02 PM:


Hi [~boky01], _hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml_ 
is correct. IIUC checkstyle only flags new issues introduced by your patch.

The RunJar's package directory has package.html. You see a warning in your run 
because checkstyle.xml is not configured to allow it. See 
http://checkstyle.sourceforge.net/config_javadoc.html#JavadocPackage.

I think you can just ignore this error; we use package.html all over the place.


was (Author: arpitagarwal):
Hi [~boky01], that is the correct 
_hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml_. IIUC 
checkstyle only flags new issues introduced by your patch.

The RunJar's package directory has package.html. You see a warning in your run 
because checkstyle.xml is not configured to allow it. See 
http://checkstyle.sourceforge.net/config_javadoc.html#JavadocPackage.

I think you can just ignore this error; we use package.html all over the place.

> Clean up RunJar and related test class
> --
>
> Key: HADOOP-13068
> URL: https://issues.apache.org/jira/browse/HADOOP-13068
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.7.2
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-13068.01.patch, HADOOP-13068.02.patch, 
> HADOOP-13068.03.patch
>
>
> Clean up RunJar and related test class to remove IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13068) Clean up RunJar and related test class

2016-05-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267385#comment-15267385
 ] 

Arpit Agarwal commented on HADOOP-13068:


Hi [~boky01], that is the correct 
_hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml_. IIUC 
checkstyle only flags new issues introduced by your patch.

The RunJar's package directory has package.html. You see a warning in your run 
because checkstyle.xml is not configured to allow it. See 
http://checkstyle.sourceforge.net/config_javadoc.html#JavadocPackage.

I think you can just ignore this error; we use package.html all over the place.

> Clean up RunJar and related test class
> --
>
> Key: HADOOP-13068
> URL: https://issues.apache.org/jira/browse/HADOOP-13068
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.7.2
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-13068.01.patch, HADOOP-13068.02.patch, 
> HADOOP-13068.03.patch
>
>
> Clean up RunJar and related test class to remove IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12957) Limit the number of outstanding async calls

2016-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267359#comment-15267359
 ] 

Hudson commented on HADOOP-12957:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9701 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9701/])
HADOOP-12957. Limit the number of outstanding async calls.  Contributed 
(szetszwo: rev 1b9f18623ab55507bea94888317c7d63d0f4a6f2)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAsyncDFSRename.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/AsyncCallLimitExceededException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/AsyncDistributedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestAsyncIPC.java


> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch, HADOOP-12957.001.patch, HADOOP-12957.002.patch, 
> HADOOP-12957.003.patch, HADOOP-12957.004.patch, HADOOP-12957.005.patch, 
> HADOOP-12957.006.patch, HADOOP-12957.007.patch, HADOOP-12957.008.patch, 
> HADOOP-12957.009.patch, HADOOP-12957.010.patch, HADOOP-12957.011.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This is to propose limiting the number of 
> outstanding async calls to eliminate the issue.
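
As a generic illustration of the limiting approach (a sketch only; the actual change adds an AsyncCallLimitExceededException, per the file list above, rather than blocking on a semaphore):

{code}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Generic sketch: cap the number of outstanding async calls with a semaphore
// and release a slot when the reply (or failure) arrives.
class AsyncCallLimiter {
  private final Semaphore outstanding;

  AsyncCallLimiter(int maxOutstanding) {
    this.outstanding = new Semaphore(maxOutstanding);
  }

  <T> CompletableFuture<T> submit(Supplier<CompletableFuture<T>> call)
      throws InterruptedException {
    outstanding.acquire();                                  // wait for a free slot
    CompletableFuture<T> future = call.get();
    future.whenComplete((result, error) -> outstanding.release());
    return future;
  }
}
{code}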



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13080:
-
Status: Patch Available  (was: Open)

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13080-v0.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13080:
-
Affects Version/s: 2.8.0

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13080-v0.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13080:
-
Attachment: HADOOP-13080-v0.patch

First version of the patch to convert ns to ms.

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13080-v0.patch
>
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267263#comment-15267263
 ] 

Inigo Goiri commented on HADOOP-13080:
--

When moving {{WindowsResourceCalculatorPlugin}} from YARN to Commons, the way 
the time is obtained changed. My proposal is to use {{Time.monotonicNow()}}, which 
already does the translation from ns to ms.
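
A minimal sketch of that proposal (the surrounding SysInfoWindows fields and parsing logic are elided; the actual patch may differ):

{code}
import org.apache.hadoop.util.Time;

// Sketch: keep the refresh timestamp in milliseconds so it can be compared
// against a millisecond refresh interval.
class RefreshSketch {
  private static final long REFRESH_INTERVAL_MS = 1000;
  private long lastRefreshTime = 0;   // milliseconds

  void refreshIfNeeded() {
    // Time.monotonicNow() already returns a monotonic clock in milliseconds,
    // unlike a raw System.nanoTime() value compared against a ms interval.
    long now = Time.monotonicNow();
    if (now - lastRefreshTime > REFRESH_INTERVAL_MS) {
      // ... re-run the Windows system-info query and parse the output ...
      lastRefreshTime = now;
    }
  }
}
{code}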

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Yahoo! No Reply (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267262#comment-15267262
 ] 

Yahoo! No Reply commented on HADOOP-13080:
--


This is an automatically generated message.

ran...@yahoo-inc.com is no longer with Yahoo! Inc.

Your message will not be forwarded.

If you have a sales inquiry, please email yahoosa...@yahoo-inc.com and someone 
will follow up with you shortly.

If you require assistance with a legal matter, please send a message to 
legal-noti...@yahoo-inc.com

Thank you!


> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13080) Refresh time in SysInfoWindows is in nanoseconds

2016-05-02 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13080:
-
Summary: Refresh time in SysInfoWindows is in nanoseconds  (was: Refresh 
time in SysInfoWindows is in nanonseconds)

> Refresh time in SysInfoWindows is in nanoseconds
> 
>
> Key: HADOOP-13080
> URL: https://issues.apache.org/jira/browse/HADOOP-13080
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>
> SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13080) Refresh time in SysInfoWindows is in nanonseconds

2016-05-02 Thread Inigo Goiri (JIRA)
Inigo Goiri created HADOOP-13080:


 Summary: Refresh time in SysInfoWindows is in nanonseconds
 Key: HADOOP-13080
 URL: https://issues.apache.org/jira/browse/HADOOP-13080
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Inigo Goiri
Assignee: Inigo Goiri


SysInfoWindows gets the current time in nanoseconds but assumes milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-05-02 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-12101:

Attachment: HADOOP-12101.014.patch

- Fix shellcheck issues and fix missing pasted code.

> Add automatic search of default Configuration variables to 
> TestConfigurationFieldsBase
> --
>
> Key: HADOOP-12101
> URL: https://issues.apache.org/jira/browse/HADOOP-12101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Attachments: HADOOP-12101.001.patch, HADOOP-12101.002.patch, 
> HADOOP-12101.003.patch, HADOOP-12101.004.patch, HADOOP-12101.005.patch, 
> HADOOP-12101.006.patch, HADOOP-12101.007.patch, HADOOP-12101.008.patch, 
> HADOOP-12101.009.patch, HADOOP-12101.010.patch, HADOOP-12101.011.patch, 
> HADOOP-12101.012.patch, HADOOP-12101.013.patch, HADOOP-12101.014.patch
>
>
> Add functionality given a Configuration variable FOO, to at least check the 
> xml file value against DEFAULT_FOO.
> Without waivers and a mapping for exceptions, this can probably never be a 
> test method that generates actual errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12957) Limit the number of outstanding async calls

2016-05-02 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267220#comment-15267220
 ] 

Xiaobing Zhou commented on HADOOP-12957:


Thank you [~szetszwo] for the long list of reviews!

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch, HADOOP-12957.001.patch, HADOOP-12957.002.patch, 
> HADOOP-12957.003.patch, HADOOP-12957.004.patch, HADOOP-12957.005.patch, 
> HADOOP-12957.006.patch, HADOOP-12957.007.patch, HADOOP-12957.008.patch, 
> HADOOP-12957.009.patch, HADOOP-12957.010.patch, HADOOP-12957.011.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This is to propose limiting the number of 
> outstanding async calls to eliminate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12957) Limit the number of outstanding async calls

2016-05-02 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-12957:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Xiaobing!

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch, HADOOP-12957.001.patch, HADOOP-12957.002.patch, 
> HADOOP-12957.003.patch, HADOOP-12957.004.patch, HADOOP-12957.005.patch, 
> HADOOP-12957.006.patch, HADOOP-12957.007.patch, HADOOP-12957.008.patch, 
> HADOOP-12957.009.patch, HADOOP-12957.010.patch, HADOOP-12957.011.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This is to propose limiting the number of 
> outstanding async calls to eliminate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267191#comment-15267191
 ] 

Hadoop QA commented on HADOOP-12101:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 47s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 11s 
{color} | {color:red} The patch generated 3 new + 96 unchanged - 0 fixed = 99 
total (was 96) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 40s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 0s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 53s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 

[jira] [Updated] (HADOOP-12957) Limit the number of outstanding async calls

2016-05-02 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-12957:
-

+1 the 011 patch looks good.

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch, HADOOP-12957.001.patch, HADOOP-12957.002.patch, 
> HADOOP-12957.003.patch, HADOOP-12957.004.patch, HADOOP-12957.005.patch, 
> HADOOP-12957.006.patch, HADOOP-12957.007.patch, HADOOP-12957.008.patch, 
> HADOOP-12957.009.patch, HADOOP-12957.010.patch, HADOOP-12957.011.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This is to propose limiting the number of 
> outstanding async calls to eliminate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-02 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267111#comment-15267111
 ] 

John Zhuge commented on HADOOP-13079:
-

Essentially interactive sessions with stdin redirected.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-02 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267036#comment-15267036
 ] 

John Zhuge commented on HADOOP-13079:
-

I can only come up with these 2 cases:
* echo dir1 dir2 | xargs hadoop fs -ls
* hadoop fs -ls dir1

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13072) WindowsGetSpaceUsed constructor should be public

2016-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267037#comment-15267037
 ] 

Hudson commented on HADOOP-13072:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9700 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9700/])
HADOOP-13072. WindowsGetSpaceUsed constructor should be public (cmccabe: rev 
2beedead72ee9efb69218aaf587de585158d6a1c)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/WindowsGetSpaceUsed.java


> WindowsGetSpaceUsed constructor should be public
> 
>
> Key: HADOOP-13072
> URL: https://issues.apache.org/jira/browse/HADOOP-13072
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>  Labels: windows
> Fix For: 2.8.0
>
> Attachments: HADOOP-13072-01.patch, HADOOP-13072-02.patch
>
>
> WindowsGetSpaceUsed constructor should be made public.
> Otherwise, building it via the Builder will not work.
> {noformat}2016-04-29 12:49:37,455 [Thread-108] WARN  fs.GetSpaceUsed$Builder 
> (GetSpaceUsed.java:build(127)) - Doesn't look like the class class 
> org.apache.hadoop.fs.WindowsGetSpaceUsed have the needed constructor
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.fs.WindowsGetSpaceUsed.(org.apache.hadoop.fs.GetSpaceUsed$Builder)
>   at java.lang.Class.getConstructor0(Unknown Source)
>   at java.lang.Class.getConstructor(Unknown Source)
>   at 
> org.apache.hadoop.fs.GetSpaceUsed$Builder.build(GetSpaceUsed.java:118)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.(BlockPoolSlice.java:165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:915)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:907)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:413)
> {noformat}
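
For context, a rough sketch of the reflection lookup the stack trace points at (paraphrased, not the Hadoop source): {{Class#getConstructor}} only returns public constructors, which is why a non-public constructor produces the warning above.

{code}
// Paraphrased sketch of a builder-style factory using reflection.
static GetSpaceUsed construct(Class<? extends GetSpaceUsed> klass,
                              GetSpaceUsed.Builder builder) throws Exception {
  // getConstructor() sees only PUBLIC constructors, hence the
  // NoSuchMethodException when the constructor is not public.
  java.lang.reflect.Constructor<? extends GetSpaceUsed> cons =
      klass.getConstructor(GetSpaceUsed.Builder.class);
  return cons.newInstance(builder);
}
{code}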



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-05-02 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-12101:

Attachment: HADOOP-12101.013.patch

- Fixes for verify-xml.sh script

> Add automatic search of default Configuration variables to 
> TestConfigurationFieldsBase
> --
>
> Key: HADOOP-12101
> URL: https://issues.apache.org/jira/browse/HADOOP-12101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Attachments: HADOOP-12101.001.patch, HADOOP-12101.002.patch, 
> HADOOP-12101.003.patch, HADOOP-12101.004.patch, HADOOP-12101.005.patch, 
> HADOOP-12101.006.patch, HADOOP-12101.007.patch, HADOOP-12101.008.patch, 
> HADOOP-12101.009.patch, HADOOP-12101.010.patch, HADOOP-12101.011.patch, 
> HADOOP-12101.012.patch, HADOOP-12101.013.patch
>
>
> Add functionality given a Configuration variable FOO, to at least check the 
> xml file value against DEFAULT_FOO.
> Without waivers and a mapping for exceptions, this can probably never be a 
> test method that generates actual errors.
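
A minimal sketch of the idea (illustration only, not the attached patch; the helper and its arguments are hypothetical names): look up {{FOO}} and {{DEFAULT_FOO}} via reflection and compare the default against the value in the XML resource.

{code}
import java.lang.reflect.Field;
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper: compare DEFAULT_FOO against the xml value for FOO.
static void checkDefault(Class<?> keysClass, String fieldName,
                         Configuration xmlConf) throws Exception {
  Field keyField = keysClass.getField(fieldName);               // e.g. FOO
  Field defField = keysClass.getField("DEFAULT_" + fieldName);  // e.g. DEFAULT_FOO
  String key = (String) keyField.get(null);
  String xmlValue = xmlConf.get(key);
  if (xmlValue != null && !xmlValue.equals(String.valueOf(defField.get(null)))) {
    System.out.println("Mismatch for " + key + ": xml=" + xmlValue
        + ", default=" + defField.get(null));
  }
}
{code}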



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-05-02 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267015#comment-15267015
 ] 

Brahma Reddy Battula commented on HADOOP-12892:
---

Raised HDFS-10353 to fix this compilation error on Windows.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-02 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266989#comment-15266989
 ] 

Colin Patrick McCabe commented on HADOOP-13079:
---

bq. No way should -q be the default under any circumstances. That is extremely 
surprising behavior that will definitely break stuff.

It's not surprising, because it matches the traditional UNIX / Linux behavior. In Linux, {{/bin/ls}} will not print control characters by default; you must pass the {{--show-control-chars}} option in order to see them. From the man page:

{code}
   --show-control-chars
  show non graphic characters as-is (default unless program is 'ls' 
and output is a terminal)
{code}

{{ls}} blasting raw control characters into an interactive terminal is a very 
bad idea.  It leads to some very serious security vulnerabilities because 
commonly used software like {{xterm}}, {{GNU screen}}, {{tmux}} and so forth 
interpret control characters.  Using control characters, you can convince these 
pieces of software to execute arbitrary code.  See 
http://marc.info/?l=bugtraq=104612710031920=p3 and 
https://www.proteansec.com/linux/blast-past-executing-code-terminal-emulators-via-escape-sequences/
  There are even CVEs for some of these issues.

We should make the default opt-in for printing control characters in our next 
compatibility-breaking release (Hadoop 3.x).

bq. In C, isatty(STDOUT_FILENO) is used to find out whether the output is a 
terminal. Since Java doesn't have isatty, I will use JNI to call C isatty() 
because the closest test System.console() == null does not work in some cases.

It would really be nice if we could determine this without using JNI, because 
it's often not available.  Under what conditions does the {{System.console() == 
null}} check not work?  The only case I was able to find in a quick Google 
search was inside an eclipse console.  That seems like a case where the 
security issues would not be a concern, because it's a debugging environment.  
Are there other cases where the non-JNI check would fail?
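
For reference, a minimal sketch of the non-JNI check being discussed (hypothetical class name, not committed code). {{System.console()}} returns null when stdin or stdout is redirected, which is why the check can misfire in embedded consoles such as the Eclipse one.

{code}
// Sketch only: decide whether to quote control characters based on
// System.console(), the non-JNI probe discussed above.
public class ConsoleProbe {
  public static void main(String[] args) {
    boolean looksLikeTerminal = System.console() != null;
    String name = "dir\u0007name";               // entry with a control char
    System.out.println(looksLikeTerminal
        ? name.replaceAll("\\p{Cntrl}", "?")     // quote, as -q would
        : name);                                 // raw when piped/redirected
  }
}
{code}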

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13072) WindowsGetSpaceUsed constructor should be public

2016-05-02 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-13072:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> WindowsGetSpaceUsed constructor should be public
> 
>
> Key: HADOOP-13072
> URL: https://issues.apache.org/jira/browse/HADOOP-13072
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>  Labels: windows
> Fix For: 2.8.0
>
> Attachments: HADOOP-13072-01.patch, HADOOP-13072-02.patch
>
>
> WindowsGetSpaceUsed constructor should be made public.
> Otherwise, building it via the Builder will not work.
> {noformat}2016-04-29 12:49:37,455 [Thread-108] WARN  fs.GetSpaceUsed$Builder 
> (GetSpaceUsed.java:build(127)) - Doesn't look like the class class 
> org.apache.hadoop.fs.WindowsGetSpaceUsed have the needed constructor
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.fs.WindowsGetSpaceUsed.(org.apache.hadoop.fs.GetSpaceUsed$Builder)
>   at java.lang.Class.getConstructor0(Unknown Source)
>   at java.lang.Class.getConstructor(Unknown Source)
>   at 
> org.apache.hadoop.fs.GetSpaceUsed$Builder.build(GetSpaceUsed.java:118)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.(BlockPoolSlice.java:165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:915)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:907)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:413)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-05-02 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266945#comment-15266945
 ] 

Brahma Reddy Battula commented on HADOOP-12892:
---

[~busbey] thanks for the clarification.

I thought the git log was generated based on the commit date (it is even shown that way in the JIRA 
[Development|https://issues.apache.org/jira/browse/HADOOP-12892?devStatusDetailDialog=repository]
 field), which made it a little confusing, at least to me:
{noformat}
$ git log --grep HADOOP-12892
commit 7b1c37a13a55dc184f1b64439b9928b53e352ee7
Author: Allen Wittenauer 
Date:   Fri Mar 4 15:42:04 2016 -0800

HADOOP-12892. fix/rewrite create-release (aw)

{noformat}

By the way, I have now come to know about {{--format=fuller}}. :)

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13072) WindowsGetSpaceUsed constructor should be public

2016-05-02 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266874#comment-15266874
 ] 

Colin Patrick McCabe commented on HADOOP-13072:
---

+1.  Thanks, [~vinayrpet].

> WindowsGetSpaceUsed constructor should be public
> 
>
> Key: HADOOP-13072
> URL: https://issues.apache.org/jira/browse/HADOOP-13072
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>  Labels: windows
> Attachments: HADOOP-13072-01.patch, HADOOP-13072-02.patch
>
>
> WindowsGetSpaceUsed constructor should be made public.
> Otherwise, building it via the Builder will not work.
> {noformat}2016-04-29 12:49:37,455 [Thread-108] WARN  fs.GetSpaceUsed$Builder 
> (GetSpaceUsed.java:build(127)) - Doesn't look like the class class 
> org.apache.hadoop.fs.WindowsGetSpaceUsed have the needed constructor
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.fs.WindowsGetSpaceUsed.(org.apache.hadoop.fs.GetSpaceUsed$Builder)
>   at java.lang.Class.getConstructor0(Unknown Source)
>   at java.lang.Class.getConstructor(Unknown Source)
>   at 
> org.apache.hadoop.fs.GetSpaceUsed$Builder.build(GetSpaceUsed.java:118)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.(BlockPoolSlice.java:165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:915)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:907)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:413)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-05-02 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266828#comment-15266828
 ] 

Sean Busbey commented on HADOOP-12892:
--

And for the canonical commit date, you should look at the message to the 
common-commits@hadoop mailing list:

https://s.apache.org/hadoop-commit-7b1c37a13a55dc184f1b64439b9928b53e352ee7

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-05-02 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266819#comment-15266819
 ] 

Sean Busbey commented on HADOOP-12892:
--

That is the author date, and presumably when Allen first committed work related 
to this fix. If you look at the full commit, you'll see the commit date is in 
fact the 27th of April:

{code}
$ git log --format=fuller --grep HADOOP-12892
commit 7b1c37a13a55dc184f1b64439b9928b53e352ee7
Author: Allen Wittenauer 
AuthorDate: Fri Mar 4 15:42:04 2016 -0800
Commit: Allen Wittenauer 
CommitDate: Wed Apr 27 08:38:22 2016 -0700

HADOOP-12892. fix/rewrite create-release (aw)
{code}

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13079:
--
Comment: was deleted

(was: 
This is an automatically generated message.

ran...@yahoo-inc.com is no longer with Yahoo! Inc.

Your message will not be forwarded.

If you have a sales inquiry, please email yahoosa...@yahoo-inc.com and someone 
will follow up with you shortly.

If you require assistance with a legal matter, please send a message to 
legal-noti...@yahoo-inc.com

Thank you!
)

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266799#comment-15266799
 ] 

Allen Wittenauer commented on HADOOP-13079:
---

No way should -q be the default under any circumstances.  That is *extremely* 
surprising behavior that will definitely break stuff.

> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-02 Thread Esther Kundin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266783#comment-15266783
 ] 

Esther Kundin commented on HADOOP-12291:


Thank you for the comments.  I am working on some of the fixes.

The thought behind leaving the option of using -1 was that some companies may 
have a deeply nested structure and do not mind the cost of the lookups. We 
thought this would be the most flexible way of building the solution, and as 
long as the default is set appropriately, most people would not be impacted in 
any case. Do you feel strongly that the -1 option for infinite recursion should 
be removed?

For your point 2, the DIRECTORY_SEARCH_TIMEOUT is a timeout set for each LDAP 
query. We are not changing the semantics of the current code, as it currently 
makes 2 calls - one for the user and one for the group - and each of those calls 
has the full timeout set. We are raising the number of calls, but the semantics 
are still the same, with the timeout being on a per-call basis.

For your point 7, I do not think you can make fewer LDAP queries. You will 
always need at least one for the original group lookup, and the if check will 
take care of subsequent calls. I can add an extra check right at the start of 
goUpGroupHierarchy; this will prevent an extra query if the function is called 
incorrectly.
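
For illustration, a sketch of such a guard (the signature and names are hypothetical, not the exact code in the patch):

{code}
// Hypothetical signature; illustrates only the early-return guard.
private void goUpGroupHierarchy(java.util.Set<String> groupDNs,
                                int levelsRemaining,
                                java.util.Set<String> allGroups)
    throws javax.naming.NamingException {
  if (levelsRemaining == 0 || groupDNs.isEmpty()) {
    return;   // nothing to look up, so no extra LDAP query is issued
  }
  // ... issue one LDAP query for the parents of groupDNs, add them to
  // allGroups, then recurse with levelsRemaining - 1 (unchanged when -1) ...
}
{code}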

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools) but would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-02 Thread Yahoo! No Reply (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266782#comment-15266782
 ] 

Yahoo! No Reply commented on HADOOP-13079:
--


This is an automatically generated message.

ran...@yahoo-inc.com is no longer with Yahoo! Inc.

Your message will not be forwarded.

If you have a sales inquiry, please email yahoosa...@yahoo-inc.com and someone 
will follow up with you shortly.

If you require assistance with a legal matter, please send a message to 
legal-noti...@yahoo-inc.com

Thank you!


> Add -q to fs -ls to print non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13079) Add -q to fs -ls to print non-printable characters

2016-05-02 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13079:
---

 Summary: Add -q to fs -ls to print non-printable characters
 Key: HADOOP-13079
 URL: https://issues.apache.org/jira/browse/HADOOP-13079
 Project: Hadoop Common
  Issue Type: Bug
Reporter: John Zhuge
Assignee: John Zhuge


Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
Non-printable characters are defined by 
[isprint(3)|http://linux.die.net/man/3/isprint] according to the current locale.

Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
the difference in these 2 command lines:
* {{hadoop fs -ls /dir}}
* {{hadoop fs -ls /dir | od -c}}

In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
{{isatty()}} because the closest test {{System.console() == null}} does not 
work in some cases.
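
A sketch of the JNI route described above (class, method and library names are hypothetical; the native side would simply return {{isatty(fd) != 0}}):

{code}
// Hypothetical wrapper; the eventual patch may use different names/packaging.
public final class Tty {
  static {
    System.loadLibrary("ttyprobe");   // hypothetical native library name
  }

  private Tty() {}

  /** Implemented in C as: return isatty(fd) != 0; */
  public static native boolean isatty(int fd);

  public static boolean stdoutIsTerminal() {
    return isatty(1);                 // POSIX STDOUT_FILENO == 1
  }
}
{code}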




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13046) Fix hadoop-dist to adapt to HDFS client library separation

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266439#comment-15266439
 ] 

Hadoop QA commented on HADOOP-13046:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-13046 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801719/bigtop.diff |
| JIRA Issue | HADOOP-13046 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9250/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix hadoop-dist to adapt to HDFS client library separation
> --
>
> Key: HADOOP-13046
> URL: https://issues.apache.org/jira/browse/HADOOP-13046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Teruyoshi Zenmyo
>Assignee: Teruyoshi Zenmyo
> Attachments: HADOOP-13046.patch, bigtop.diff
>
>
> Some build-related files should be updated to adapt to HDFS client library 
> separation. The following issues exist:
> - hdfs.h is not included.
> - hadoop.component is not set in the pom.xml of the hdfs client libraries.
> - hdfs-native-client is not included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13046) Fix hadoop-dist to adapt to HDFS client library separation

2016-05-02 Thread Teruyoshi Zenmyo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teruyoshi Zenmyo updated HADOOP-13046:
--
Attachment: bigtop.diff

Changes on Bigtop to build hadoop-3.0.0 RPMs.
(The conf-pseudo module is omitted due to a directory mode issue.)

> Fix hadoop-dist to adapt to HDFS client library separation
> --
>
> Key: HADOOP-13046
> URL: https://issues.apache.org/jira/browse/HADOOP-13046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Teruyoshi Zenmyo
>Assignee: Teruyoshi Zenmyo
> Attachments: HADOOP-13046.patch, bigtop.diff
>
>
> Some build-related files should be updated to adapt to HDFS client library 
> separation. The following issues exist:
> - hdfs.h is not included.
> - hadoop.component is not set in the pom.xml of the hdfs client libraries.
> - hdfs-native-client is not included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13046) Fix hadoop-dist to adapt to HDFS client library separation

2016-05-02 Thread Teruyoshi Zenmyo (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266432#comment-15266432
 ] 

Teruyoshi Zenmyo commented on HADOOP-13046:
---

Thanks for the response, [~wheat9].
I have checked as follows:
- built packages with `mvn package -Pdist -Pnative -DskipTests` and checked the 
difference in the hadoop-dist package layout (shown in the comment above).
- built RPMs using Apache Bigtop (with some modifications).
- installed the RPMs on a small cluster of virtual machines and checked simple 
HDFS operations (get/put/ls).

> Fix hadoop-dist to adapt to HDFS client library separation
> --
>
> Key: HADOOP-13046
> URL: https://issues.apache.org/jira/browse/HADOOP-13046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Teruyoshi Zenmyo
>Assignee: Teruyoshi Zenmyo
> Attachments: HADOOP-13046.patch
>
>
> Some build-related files should be updated to adapt to HDFS client library 
> separation. The following issues exist:
> - hdfs.h is not included.
> - hadoop.component is not set in the pom.xml of the hdfs client libraries.
> - hdfs-native-client is not included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13035) Add states INITING and STARTING to YARN Service model to cover in-transition states.

2016-05-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266325#comment-15266325
 ] 

Steve Loughran commented on HADOOP-13035:
-

Thinking about this more, it is possible to expose the fact that a service is in 
a transition state without changing {{Service.STATE}}, which is the 
incompatibility barrier.


What is needed is to retain that state model for all existing code, while making 
the state-transition-in-progress condition visible to code which is aware that 
the intermediate states exist.

This can be done in a number of ways:

h3. A {{transitionInProgress}} variable and accessor

When a state is entered, the ({{AtomicBoolean}}) flag is set; it is cleared on 
exit. The state of a service can be queried with something like 
{{isInState(STATE.STARTED) && !transitionInProgress}}. 

Trouble spots here are that such a probe is not in itself atomic unless 
executed in a {{synchronized}} block, which will also be needed when entering 
the state. The extra fun happens when {{stop()}} is called during a change, or, 
say, {{start()}} within {{init()}}. There are some other corners too: what if 
the flag is set, but the service is in state {{NOTINITED}}?
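
A rough sketch of that option (illustration only; {{stateChangeLock}} and {{currentState}} are hypothetical stand-ins for whatever {{AbstractService}} uses internally):

{code}
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.hadoop.service.Service;

// Sketch only: a transition flag alongside the existing state model.
class TransitionFlagSketch {
  private final Object stateChangeLock = new Object();
  private final AtomicBoolean transitionInProgress = new AtomicBoolean(false);
  private volatile Service.STATE currentState = Service.STATE.NOTINITED;

  boolean isStableInState(Service.STATE expected) {
    // The probe must hold the same lock as the code flipping the flag,
    // otherwise "in state X and not transitioning" is not atomic.
    synchronized (stateChangeLock) {
      return currentState == expected && !transitionInProgress.get();
    }
  }
}
{code}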

h3. Implement the extended state as a new enum, translating down to the existing 
state for existing code in the getters and state probes.


Here there'd be a new interface:

{code}

interface ExtendedService extends Service {

  /** All of Service.STATE plus the in-transition states. */
  enum ExtendedState { /* ... Service.STATE + transitions ... */ }

  ExtendedState getExtendedState();

  boolean isInExtendedState(ExtendedState es);

  boolean canEnterExtendedState(ExtendedState es);

}
{code}

The state model would be the extended one; what would change is the old state 
queries:

{code}
public boolean isInState(Service.State s) {
  ExtendedState es = getExtendedState();
  return mapExtendedToSimpleState(es) == s;
}

public State mapExtendedToSimpleState(ExtendedState es) {
  switch (es) {
    case NOTINITED: return State.NOTINITED;
    case INITED:    return State.INITED;
    case INITING:   return State.INITED;
    // ... and so on for the remaining extended states
    default: throw new IllegalStateException("unmapped state " + es);
  }
}

{code}

Actually, you could do the mapping in the enum itself, with every ExtendedState 
instance declaring its simple state in the constructor.
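
For example, a sketch of that constructor-based mapping (the exact set of extended states, and which simple state each in-transition state maps to, are design choices, not settled here; assumes {{org.apache.hadoop.service.Service}} is imported):

{code}
// Each extended state declares the simple Service.STATE it presents to
// existing code.
enum ExtendedState {
  NOTINITED(Service.STATE.NOTINITED),
  INITING(Service.STATE.INITED),
  INITED(Service.STATE.INITED),
  STARTING(Service.STATE.STARTED),
  STARTED(Service.STATE.STARTED),
  STOPPED(Service.STATE.STOPPED);

  private final Service.STATE simpleState;

  ExtendedState(Service.STATE simpleState) {
    this.simpleState = simpleState;
  }

  Service.STATE toSimpleState() {
    return simpleState;
  }
}
{code}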


I *believe* this could work; the trouble spot would be managing state entry 
calls, including {{enterState(State.STARTED)}}, and the addition of child 
services within a composite service during the {{serviceInit()}} and 
{{serviceStart()}} operations.

If this can be shown to implement the desired extended state model and retain 
backwards compatibility with subclasses of {{AbstractService}} inside and 
outside the Hadoop codebase, then I'm prepared to withdraw my -1. I do still 
require the linked JIRAs to go in first, as they push the boundaries of the 
state model further and, being derivative of the slider workflow services and 
service launcher, give us something we could migrate that code to. That is: if 
YARN-679 and YARN-1564 handle this, then testing slider becomes a lot easier.

> Add states INITING and STARTING to YARN Service model to cover in-transition 
> states.
> 
>
> Key: HADOOP-13035
> URL: https://issues.apache.org/jira/browse/HADOOP-13035
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
> Attachments: 0001-HADOOP-13035.patch, 0002-HADOOP-13035.patch, 
> 0003-HADOOP-13035.patch
>
>
> As per the discussion in YARN-3971 the we should be setting the service state 
> to STARTED only after serviceStart() 
> Currently {{AbstractService#start()}} is set
> {noformat} 
>  if (stateModel.enterState(STATE.STARTED) != STATE.STARTED) {
> try {
>   startTime = System.currentTimeMillis();
>   serviceStart();
> ..
>  }
> {noformat}
> enterState sets the service state to proposed state. So in 
> {{service.getServiceState}} in {{serviceStart()}} will return STARTED .



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266290#comment-15266290
 ] 

Hadoop QA commented on HADOOP-10392:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 39s 
{color} | {color:green} root-jdk1.8.0_92 with JDK v1.8.0_92 generated 0 new + 
706 unchanged - 34 fixed = 706 total (was 740) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 39s 
{color} | {color:green} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 0 new + 
703 unchanged - 33 fixed = 703 total (was 736) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 14s 
{color} | {color:red} root: The patch generated 1 new + 714 unchanged - 5 fixed 
= 715 total (was 719) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 49s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_92. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 103m 18s 
{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed 
with JDK v1.8.0_92. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 2s 
{color} | {color:green} hadoop-streaming in the patch passed with JDK 
v1.8.0_92. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-archives in the patch passed with JDK v1.8.0_92. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 

[jira] [Updated] (HADOOP-13073) RawLocalFileSystem does not react on changing umask

2016-05-02 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13073:
--
Attachment: HADOOP-13073.01.patch

[~ste...@apache.org] [~mattf] [~arpitagarwal]
Could you please review my patch?

> RawLocalFileSystem does not react on changing umask
> ---
>
> Key: HADOOP-13073
> URL: https://issues.apache.org/jira/browse/HADOOP-13073
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-13073.01.patch
>
>
> FileSystemContractBaseTest#testMkdirsWithUmask is changing umask under the 
> filesystem. RawLocalFileSystem reads the config on startup so it will not 
> react if we change the umask.
> It blocks [HADOOP-7363|https://issues.apache.org/jira/browse/HADOOP-7363] 
> since testMkdirsWithUmask test will never work with RawLocalFileSystem.
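
As an illustration of the reported behaviour (a sketch under the report's assumption that the umask is read when the filesystem is initialized; not the fix):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RawLocalFileSystem;

public class UmaskSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY, "062");
    RawLocalFileSystem fs = new RawLocalFileSystem();
    fs.initialize(URI.create("file:///"), conf);  // umask read here, per the report

    conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY, "022");
    // Directories created now still reflect the old umask; the filesystem
    // would need to be re-initialized to pick up the new value.
    fs.mkdirs(new Path("/tmp/umask-sketch"));
  }
}
{code}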



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13072) WindowsGetSpaceUsed constructor should be public

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266218#comment-15266218
 ] 

Hadoop QA commented on HADOOP-13072:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 7s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_92. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 9s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 35s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801696/HADOOP-13072-02.patch 
|
| JIRA Issue | HADOOP-13072 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9d2d5309972a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-05-02 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266210#comment-15266210
 ] 

Brahma Reddy Battula commented on HADOOP-12892:
---

The commit date is given as " *Date: 05-03-2016 05:12:04* ". I think it should 
be April 27th, 2016.

{noformat}
Revision: 7b1c37a13a55dc184f1b64439b9928b53e352ee7
Author: Allen Wittenauer 
Date: 05-03-2016 05:12:04
Message:
HADOOP-12892. fix/rewrite create-release (aw)
{noformat}

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13072) WindowsGetSpaceUsed constructor should be public

2016-05-02 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-13072:
---
Attachment: HADOOP-13072-02.patch

Updated patch to trim to 80 chars.

> WindowsGetSpaceUsed constructor should be public
> 
>
> Key: HADOOP-13072
> URL: https://issues.apache.org/jira/browse/HADOOP-13072
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>  Labels: windows
> Attachments: HADOOP-13072-01.patch, HADOOP-13072-02.patch
>
>
> WindowsGetSpaceUsed constructor should be made public.
> Otherwise, building it via the Builder will not work.
> {noformat}2016-04-29 12:49:37,455 [Thread-108] WARN  fs.GetSpaceUsed$Builder 
> (GetSpaceUsed.java:build(127)) - Doesn't look like the class class 
> org.apache.hadoop.fs.WindowsGetSpaceUsed have the needed constructor
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.fs.WindowsGetSpaceUsed.(org.apache.hadoop.fs.GetSpaceUsed$Builder)
>   at java.lang.Class.getConstructor0(Unknown Source)
>   at java.lang.Class.getConstructor(Unknown Source)
>   at 
> org.apache.hadoop.fs.GetSpaceUsed$Builder.build(GetSpaceUsed.java:118)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.(BlockPoolSlice.java:165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:915)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:907)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:413)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org