[jira] [Commented] (HADOOP-9321) fix coverage org.apache.hadoop.net

2016-06-28 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15354491#comment-15354491 ]

Hudson commented on HADOOP-9321:


SUCCESS: Integrated in Hadoop-trunk-Commit #10030 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10030/])
HADOOP-9321. fix coverage org.apache.hadoop.net (Ivan A. Veselovsky via aw) 
(aw: rev 1faaa6907852b193cc5ac34f25d6ae41a1f10e61)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSocketFactory.java


> fix coverage  org.apache.hadoop.net
> ---
>
> Key: HADOOP-9321
> URL: https://issues.apache.org/jira/browse/HADOOP-9321
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 2.3.0, 3.0.0-alpha1
>Reporter: Aleksey Gorshkov
>Assignee: Ivan A. Veselovsky
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-9321-trunk-a.patch, HADOOP-9321-trunk-b.patch, 
> HADOOP-9321-trunk-d.patch, HADOOP-9321-trunk.patch
>
>
> fix coverage  org.apache.hadoop.net
> HADOOP-9321-trunk.patch is the patch for trunk, branch-2, and branch-0.23.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9330) Add custom JUnit4 test runner with configurable timeout

2016-06-28 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15354492#comment-15354492 ]

Hudson commented on HADOOP-9330:


SUCCESS: Integrated in Hadoop-trunk-Commit #10030 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10030/])
HADOOP-9330. Add custom JUnit4 test runner with configurable timeout (aw: rev 
610363559135a725499cf46e256424d16bec98a3)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/HadoopTestBase.java


> Add custom JUnit4 test runner with configurable timeout
> ---
>
> Key: HADOOP-9330
> URL: https://issues.apache.org/jira/browse/HADOOP-9330
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-9330-timeouts-1.patch
>
>
> HADOOP-9112 has added a requirement for all new test methods to declare a 
> timeout, so that jenkins/maven builds will have better information when a 
> test times out.
> Hard-coding timeouts into tests is dangerous, as it will generate spurious 
> failures on slower machines/networks and when debugging a test.
> I propose providing a custom JUnit4 test runner that test cases can declare 
> as their test runner; this can provide timeouts specified at run time, rather 
> than in source.
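The run-time lookup described above can be sketched with a system property. This is a hedged illustration only: the property name `test.timeout.seconds`, the class name, and the default value are invented here for the example, not taken from the HADOOP-9330 patch (which landed as HadoopTestBase.java).

```java
// Sketch: resolve a test timeout at run time instead of hard-coding it
// per test method. The property name and default are illustrative
// assumptions, not the values used by the actual HADOOP-9330 patch.
public class ConfigurableTimeout {
    static final int DEFAULT_TIMEOUT_SECONDS = 100;

    // Returns the timeout from -Dtest.timeout.seconds when set, else the
    // default, so slower machines can widen it without editing sources.
    static int timeoutSeconds() {
        String v = System.getProperty("test.timeout.seconds");
        if (v == null || v.trim().isEmpty()) {
            return DEFAULT_TIMEOUT_SECONDS;
        }
        return Integer.parseInt(v.trim());
    }

    public static void main(String[] args) {
        System.out.println("test timeout = " + timeoutSeconds() + "s");
    }
}
```

A JUnit4 runner or a shared `@Rule` could then feed this value into `org.junit.rules.Timeout` instead of relying on per-method annotations.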



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9330) Add custom JUnit4 test runner with configurable timeout

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9330:
-
Labels:   (was: BB2015-05-TBR)

> Add custom JUnit4 test runner with configurable timeout
> ---
>
> Key: HADOOP-9330
> URL: https://issues.apache.org/jira/browse/HADOOP-9330
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-9330-timeouts-1.patch
>
>
> HADOOP-9112 has added a requirement for all new test methods to declare a 
> timeout, so that jenkins/maven builds will have better information when a 
> test times out.
> Hard-coding timeouts into tests is dangerous, as it will generate spurious 
> failures on slower machines/networks and when debugging a test.
> I propose providing a custom JUnit4 test runner that test cases can declare 
> as their test runner; this can provide timeouts specified at run time, rather 
> than in source.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9330) Add custom JUnit4 test runner with configurable timeout

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9330:
-
   Resolution: Fixed
 Assignee: Steve Loughran
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

+1 committed to trunk.

Thanks!

> Add custom JUnit4 test runner with configurable timeout
> ---
>
> Key: HADOOP-9330
> URL: https://issues.apache.org/jira/browse/HADOOP-9330
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-9330-timeouts-1.patch
>
>
> HADOOP-9112 has added a requirement for all new test methods to declare a 
> timeout, so that jenkins/maven builds will have better information when a 
> test times out.
> Hard-coding timeouts into tests is dangerous, as it will generate spurious 
> failures on slower machines/networks and when debugging a test.
> I propose providing a custom JUnit4 test runner that test cases can declare 
> as their test runner; this can provide timeouts specified at run time, rather 
> than in source.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9321) fix coverage org.apache.hadoop.net

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9321:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

+1 committed to trunk.

Thanks!

> fix coverage  org.apache.hadoop.net
> ---
>
> Key: HADOOP-9321
> URL: https://issues.apache.org/jira/browse/HADOOP-9321
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 2.3.0, 3.0.0-alpha1
>Reporter: Aleksey Gorshkov
>Assignee: Ivan A. Veselovsky
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-9321-trunk-a.patch, HADOOP-9321-trunk-b.patch, 
> HADOOP-9321-trunk-d.patch, HADOOP-9321-trunk.patch
>
>
> fix coverage  org.apache.hadoop.net
> HADOOP-9321-trunk.patch is the patch for trunk, branch-2, and branch-0.23.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12864) Remove bin/rcc script

2016-06-28 Thread Allen Wittenauer (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15354476#comment-15354476 ]

Allen Wittenauer commented on HADOOP-12864:
---

Thanks Andrew!

> Remove bin/rcc script
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13034) Log message about input options in distcp lacks some items

2016-06-28 Thread Takashi Ohnishi (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-13034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15354451#comment-15354451 ]

Takashi Ohnishi commented on HADOOP-13034:
--

Thank you [~aw] for committing!
Thank you [~templedf] and [~steve_l] for reviewing!!

> Log message about input options in distcp lacks some items
> --
>
> Key: HADOOP-13034
> URL: https://issues.apache.org/jira/browse/HADOOP-13034
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13034.1.patch
>
>
> The log message printed when running distcp does not show some of the input 
> options, namely append, useDiff, and the snapshot names.
> {code}
> 16/04/18 21:57:36 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100.0, 
> copyStrategy='uniformsize', preserveStatus=[], preserveRawXattrs=false, 
> atomicWorkPath=null, logPath=null, sourceFileListing=null, 
> sourcePaths=[/user/hadoop/source], targetPath=/user/hadoop/target, 
> targetPathExists=true, filtersFile='null'}
> {code}
> I think that this message is useful for debugging, so it would be better to 
> add the missing options.
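To make the gap concrete, here is a hypothetical sketch of widening such a toString(): the field names append, useDiff, fromSnapshot, and toSnapshot are assumptions for illustration, not the actual DistCpOptions members.

```java
// Hypothetical sketch: include the options the "Input Options" log line
// currently omits when building the string. Field names are illustrative,
// not the real DistCpOptions members.
public class DistCpOptionsSketch {
    boolean syncFolder = true;
    boolean append = false;        // missing from the logged string today
    boolean useDiff = false;       // missing from the logged string today
    String fromSnapshot = null;    // missing from the logged string today
    String toSnapshot = null;      // missing from the logged string today

    @Override
    public String toString() {
        return "DistCpOptions{syncFolder=" + syncFolder
            + ", append=" + append
            + ", useDiff=" + useDiff
            + ", fromSnapshot='" + fromSnapshot + '\''
            + ", toSnapshot='" + toSnapshot + '\'' + '}';
    }
}
```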



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11823) Checking for Verifier in RPC Denied Reply

2016-06-28 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15354380#comment-15354380 ]

Hadoop QA commented on HADOOP-11823:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 26s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:85209cc |
| JIRA Issue | HADOOP-11823 |
| GITHUB PR | https://github.com/apache/hadoop/pull/106 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 62510d658457 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 77031a9 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9896/testReport/ |
| modules | C: hadoop-common-project/hadoop-nfs U: hadoop-common-project/hadoop-nfs |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9896/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.



> Checking for Verifier in RPC Denied Reply
> -
>
> Key: HADOOP-11823
> URL: https://issues.apache.org/jira/browse/HADOOP-11823
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Gokul Soundararajan
>Assignee: Pradeep Nayak Udupi Kadbet
>  Labels: newbie
> Fix For: 2.6.0, 2.7.0
>
> Attachments: HADOOP-11823.patch
>
>   Original 

[jira] [Commented] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-06-28 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15354214#comment-15354214 ]

Hadoop QA commented on HADOOP-12345:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 12s{color} | {color:orange} hadoop-common-project/hadoop-nfs: The patch generated 1 new + 11 unchanged - 0 fixed = 12 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 39s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:85209cc |
| JIRA Issue | HADOOP-12345 |
| GITHUB PR | https://github.com/apache/hadoop/pull/104 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 8bf9e45b11d3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 77031a9 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9895/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-nfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/9895/artifact/patchprocess/whitespace-eol.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9895/testReport/ |
| modules | C: hadoop-common-project/hadoop-nfs U: hadoop-common-project/hadoop-nfs |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9895/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.



> Credential length in CredentialsSys.java incorrect
> --
>
> Key: HADOOP-12345
> URL: https://issues.apache.org/jira/browse/HADOOP-12345
> Project: Hadoop Common
>  Issue 

[jira] [Updated] (HADOOP-11823) Checking for Verifier in RPC Denied Reply

2016-06-28 Thread Pradeep Nayak Udupi Kadbet (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pradeep Nayak Udupi Kadbet updated HADOOP-11823:

Attachment: HADOOP-11823.patch

Attaching the patch here, as requested by Andrew.

> Checking for Verifier in RPC Denied Reply
> -
>
> Key: HADOOP-11823
> URL: https://issues.apache.org/jira/browse/HADOOP-11823
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Gokul Soundararajan
>Assignee: Pradeep Nayak Udupi Kadbet
>  Labels: newbie
> Fix For: 2.6.0, 2.7.0
>
> Attachments: HADOOP-11823.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Hi,
> There is a bug in the way hadoop-nfs parses the reply for an RPC denied 
> message. Specifically, this happens in RpcDeniedReply.java at line #50.
> When RPC returns a denied reply, the code should not check for a verifier. It 
> is a bug because it doesn't match the RPC protocol (see page 33 of the NFS 
> Illustrated book).
> I would be happy to submit the patch but I need some help to commit into 
> mainline. I haven't committed into Hadoop yet.
> Thanks,
> Gokul
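For context, the layout the report appeals to (RFC 5531) gives a MSG_DENIED reply body no verifier at all, only a reject_stat (RPC_MISMATCH = 0 or AUTH_ERROR = 1). A hedged sketch of that parse follows; the class and method names are invented (this is not the hadoop-nfs API), and the buffer is assumed to be positioned at reply_stat.

```java
import java.nio.ByteBuffer;

// Sketch of RFC 5531 reply parsing: a denied reply carries a reject_stat,
// not a verifier; only accepted replies carry a verifier. Names are
// illustrative, and trailing fields (mismatch_info low/high, auth_stat
// detail) are skipped for brevity.
public class RpcReplySketch {
    static final int MSG_ACCEPTED = 0;
    static final int MSG_DENIED = 1;

    // Assumes xdr is positioned at reply_stat (after xid and msg_type).
    static String describeReply(ByteBuffer xdr) {
        int replyStat = xdr.getInt();
        if (replyStat == MSG_DENIED) {
            // No verifier to read here: the next word is reject_stat.
            int rejectStat = xdr.getInt();
            return rejectStat == 0 ? "RPC_MISMATCH" : "AUTH_ERROR";
        }
        // Accepted replies do start with a verifier: flavor + opaque body.
        int flavor = xdr.getInt();
        int verifierLen = xdr.getInt();
        xdr.position(xdr.position() + verifierLen);
        return "ACCEPTED(verifier flavor " + flavor + ")";
    }
}
```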



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11823) Checking for Verifier in RPC Denied Reply

2016-06-28 Thread Pradeep Nayak Udupi Kadbet (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pradeep Nayak Udupi Kadbet updated HADOOP-11823:

   Labels: newbie  (was: )
Fix Version/s: 2.6.0
   2.7.0
Affects Version/s: 2.7.0
 Target Version/s: 2.7.0, 2.6.0, 2.8.0  (was: 2.8.0)
   Status: Patch Available  (was: Open)

I have a patch ready for this issue. Attached the patch as well.

> Checking for Verifier in RPC Denied Reply
> -
>
> Key: HADOOP-11823
> URL: https://issues.apache.org/jira/browse/HADOOP-11823
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0, 2.6.0
>Reporter: Gokul Soundararajan
>Assignee: Pradeep Nayak Udupi Kadbet
>  Labels: newbie
> Fix For: 2.7.0, 2.6.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Hi,
> There is a bug in the way hadoop-nfs parses the reply for an RPC denied 
> message. Specifically, this happens in RpcDeniedReply.java at line #50.
> When RPC returns a denied reply, the code should not check for a verifier. It 
> is a bug because it doesn't match the RPC protocol (see page 33 of the NFS 
> Illustrated book).
> I would be happy to submit the patch but I need some help to commit into 
> mainline. I haven't committed into Hadoop yet.
> Thanks,
> Gokul



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-06-28 Thread Pradeep Nayak Udupi Kadbet (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15354184#comment-15354184 ]

Pradeep Nayak Udupi Kadbet commented on HADOOP-12345:
-

I have attached the new version of the patch here. 

Also added test cases for hostnames whose lengths are and are not multiples of 
4. I needed to add a couple of public methods to the class to make this 
possible.

> Credential length in CredentialsSys.java incorrect
> --
>
> Key: HADOOP-12345
> URL: https://issues.apache.org/jira/browse/HADOOP-12345
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Pradeep Nayak Udupi Kadbet
>Assignee: Pradeep Nayak Udupi Kadbet
>Priority: Critical
> Attachments: HADOOP-12345.001.patch, HADOOP-12345.patch
>
>
> Hi -
> There is a bug in the way hadoop-nfs sets the credential length in the 
> "Credentials" field of the NFS RPC packet when using AUTH_SYS.
> In CredentialsSys.java, when we are writing the creds into the XDR object, we 
> set the length as follows:
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> 96 mCredentialsLength = 20 + mHostName.getBytes().length;
> (20 corresponds to 4 bytes for mStamp, 4 bytes for mUID, 4 bytes for mGID, 4 
> bytes for the length field of the hostname, and 4 bytes for the number of aux 
> gids), and this is okay.
> However, when we add the length of the hostname to this, we are not adding 
> the extra padding bytes for the hostname (if the length is not a multiple of 
> 4), and thus when the NFS server reads the packet, it returns GARBAGE_ARGS 
> because it doesn't read the uid field where it expects to. I can reproduce 
> this issue consistently on machines where the hostname length is not a 
> multiple of 4.
> A possible fix is to do something like this:
> int pad = mHostName.getBytes().length % 4;
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length + pad;
> I would be happy to submit the patch but I need some help to commit into 
> mainline. I haven't committed into Hadoop yet.
> Cheers!
> Pradeep
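On the arithmetic itself: XDR pads opaque data up to a 4-byte boundary, so the pad is (4 - length % 4) % 4. That equals length % 4 only when the remainder is 0 or 2, so the snippet quoted above under-counts when the remainder is 1 or 3. A sketch of the corrected computation, with class and method names invented for illustration (not the actual CredentialsSys API):

```java
// Sketch: XDR rounds opaque byte strings up to a multiple of 4, so the
// credential length must count the pad bytes. Names are illustrative,
// not the actual hadoop-nfs CredentialsSys API.
public class XdrPadSketch {
    // Pad needed to reach the next 4-byte boundary: 0..3 bytes.
    static int pad(int rawLength) {
        return (4 - rawLength % 4) % 4;
    }

    // 20 fixed bytes = mStamp(4) + hostname length field(4) + mUID(4)
    // + mGID(4) + aux-GID count(4); aux GIDs themselves would be extra.
    static int credentialsLength(String hostName) {
        int n = hostName.getBytes(java.nio.charset.StandardCharsets.UTF_8).length;
        return 20 + n + pad(n);
    }
}
```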



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-06-28 Thread Pradeep Nayak Udupi Kadbet (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pradeep Nayak Udupi Kadbet updated HADOOP-12345:

Attachment: HADOOP-12345.001.patch

> Credential length in CredentialsSys.java incorrect
> --
>
> Key: HADOOP-12345
> URL: https://issues.apache.org/jira/browse/HADOOP-12345
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Pradeep Nayak Udupi Kadbet
>Assignee: Pradeep Nayak Udupi Kadbet
>Priority: Critical
> Attachments: HADOOP-12345.001.patch, HADOOP-12345.patch
>
>
> Hi -
> There is a bug in the way hadoop-nfs sets the credential length in the 
> "Credentials" field of the NFS RPC packet when using AUTH_SYS.
> In CredentialsSys.java, when we are writing the creds into the XDR object, we 
> set the length as follows:
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> 96 mCredentialsLength = 20 + mHostName.getBytes().length;
> (20 corresponds to 4 bytes for mStamp, 4 bytes for mUID, 4 bytes for mGID, 4 
> bytes for the length field of the hostname, and 4 bytes for the number of aux 
> gids), and this is okay.
> However, when we add the length of the hostname to this, we are not adding 
> the extra padding bytes for the hostname (if the length is not a multiple of 
> 4), and thus when the NFS server reads the packet, it returns GARBAGE_ARGS 
> because it doesn't read the uid field where it expects to. I can reproduce 
> this issue consistently on machines where the hostname length is not a 
> multiple of 4.
> A possible fix is to do something like this:
> int pad = mHostName.getBytes().length % 4;
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length + pad;
> I would be happy to submit the patch but I need some help to commit into 
> mainline. I haven't committed into Hadoop yet.
> Cheers!
> Pradeep



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-06-28 Thread Pradeep Nayak Udupi Kadbet (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pradeep Nayak Udupi Kadbet updated HADOOP-12345:

Attachment: (was: HADOOP-12345.patch.001)

> Credential length in CredentialsSys.java incorrect
> --
>
> Key: HADOOP-12345
> URL: https://issues.apache.org/jira/browse/HADOOP-12345
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Pradeep Nayak Udupi Kadbet
>Assignee: Pradeep Nayak Udupi Kadbet
>Priority: Critical
> Attachments: HADOOP-12345.patch
>
>
> Hi -
> There is a bug in the way hadoop-nfs sets the credential length in the 
> "Credentials" field of the NFS RPC packet when using AUTH_SYS.
> In CredentialsSys.java, when we are writing the creds into the XDR object, we 
> set the length as follows:
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> 96 mCredentialsLength = 20 + mHostName.getBytes().length;
> (20 corresponds to 4 bytes for mStamp, 4 bytes for mUID, 4 bytes for mGID, 4 
> bytes for the length field of the hostname, and 4 bytes for the number of aux 
> gids), and this is okay.
> However, when we add the length of the hostname to this, we are not adding 
> the extra padding bytes for the hostname (if the length is not a multiple of 
> 4), and thus when the NFS server reads the packet, it returns GARBAGE_ARGS 
> because it doesn't read the uid field where it expects to. I can reproduce 
> this issue consistently on machines where the hostname length is not a 
> multiple of 4.
> A possible fix is to do something like this:
> int pad = mHostName.getBytes().length % 4;
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length + pad;
> I would be happy to submit the patch but I need some help to commit into 
> mainline. I haven't committed into Hadoop yet.
> Cheers!
> Pradeep



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-06-28 Thread Pradeep Nayak Udupi Kadbet (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pradeep Nayak Udupi Kadbet updated HADOOP-12345:

Attachment: HADOOP-12345.patch.001

Updated the patch file after addressing review comments from Andrew.

> Credential length in CredentialsSys.java incorrect
> --
>
> Key: HADOOP-12345
> URL: https://issues.apache.org/jira/browse/HADOOP-12345
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Pradeep Nayak Udupi Kadbet
>Assignee: Pradeep Nayak Udupi Kadbet
>Priority: Critical
> Attachments: HADOOP-12345.patch
>
>
> Hi -
> There is a bug in the way hadoop-nfs sets the credential length in the 
> "Credentials" field of the NFS RPC packet when using AUTH_SYS.
> In CredentialsSys.java, when we are writing the creds into the XDR object, we 
> set the length as follows:
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> 96 mCredentialsLength = 20 + mHostName.getBytes().length;
> (20 corresponds to 4 bytes for mStamp, 4 bytes for mUID, 4 bytes for mGID, 4 
> bytes for the length field of the hostname, and 4 bytes for the number of aux 
> gids), and this is okay.
> However, when we add the length of the hostname to this, we are not adding 
> the extra padding bytes for the hostname (if the length is not a multiple of 
> 4), and thus when the NFS server reads the packet, it returns GARBAGE_ARGS 
> because it doesn't read the uid field where it expects to. I can reproduce 
> this issue consistently on machines where the hostname length is not a 
> multiple of 4.
> A possible fix is to do something like this:
> int pad = mHostName.getBytes().length % 4;
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length + pad;
> I would be happy to submit the patch but I need some help to commit into 
> mainline. I haven't committed into Hadoop yet.
> Cheers!
> Pradeep






[jira] [Commented] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-06-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353998#comment-15353998
 ] 

Andrew Wang commented on HADOOP-12345:
--

I normally review patches on JIRA, so you can just post it here.




[jira] [Commented] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-06-28 Thread Pradeep Nayak Udupi Kadbet (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353993#comment-15353993
 ] 

Pradeep Nayak Udupi Kadbet commented on HADOOP-12345:
-

OK, I will put the patch file here after the code review. Does that sound okay?




[jira] [Commented] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-06-28 Thread Pradeep Nayak Udupi Kadbet (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353992#comment-15353992
 ] 

Pradeep Nayak Udupi Kadbet commented on HADOOP-12345:
-

Oops! I meant to check len % 4 != 0. I will update the pull request with the 
latest change and incorporate your suggestion.




[jira] [Commented] (HADOOP-11823) Checking for Verifier in RPC Denied Reply

2016-06-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353988#comment-15353988
 ] 

Andrew Wang commented on HADOOP-11823:
--

Hi [~pradeepu], could you also post a patch and then click the "Submit Patch" 
button so we can get a precommit run?

Also, I don't have a copy of the NFS Illustrated book; is the same information 
in the RFC somewhere? Thanks.

> Checking for Verifier in RPC Denied Reply
> -
>
> Key: HADOOP-11823
> URL: https://issues.apache.org/jira/browse/HADOOP-11823
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Gokul Soundararajan
>Assignee: Pradeep Nayak Udupi Kadbet
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Hi,
> There is a bug in the way hadoop-nfs parses the reply for an RPC denied 
> message. Specifically, this happens in RpcDeniedReply.java at line #50.
> When RPC returns a denied reply, the code should not check for a verifier; 
> doing so doesn't match the RPC protocol (see page 33 of the NFS Illustrated 
> book).
> I would be happy to submit the patch, but I need some help committing it 
> into mainline. I haven't committed to Hadoop yet.
> Thanks,
> Gokul
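For context, in the ONC RPC reply layout (RFC 5531) an accepted reply carries an opaque_auth verifier while a denied reply does not, so a parser must not read a verifier in the MSG_DENIED branch. The standalone sketch below illustrates the distinction with invented names; it is not Hadoop's RpcDeniedReply code.

```java
// Minimal sketch of parsing an ONC RPC reply body: the verifier is read only
// in the accepted branch. Illustrative only; names are invented.
import java.nio.ByteBuffer;

public class RpcReplySketch {
  static final int MSG_ACCEPTED = 0;
  static final int MSG_DENIED = 1;

  static String parseReplyBody(ByteBuffer xdr) {
    int replyStat = xdr.getInt();
    if (replyStat == MSG_ACCEPTED) {
      // An accepted reply carries an opaque_auth verifier:
      // flavor, body length, then the opaque body itself.
      xdr.getInt();                              // verifier flavor
      int verifierLen = xdr.getInt();
      xdr.position(xdr.position() + verifierLen); // skip verifier body
      return "accepted, accept_stat=" + xdr.getInt();
    }
    // A denied reply has no verifier; it goes straight to the reject status.
    return "denied, reject_stat=" + xdr.getInt();
  }

  public static void main(String[] args) {
    ByteBuffer denied = ByteBuffer.allocate(8);
    denied.putInt(MSG_DENIED).putInt(1);         // reject_stat 1 = AUTH_ERROR
    denied.flip();
    System.out.println(parseReplyBody(denied));  // denied, reject_stat=1
  }
}
```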






[jira] [Commented] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-06-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353986#comment-15353986
 ] 

Andrew Wang commented on HADOOP-12345:
--

Also, do you mind posting a patch here on JIRA? We optionally use GitHub for 
code review, but our precommit bot doesn't watch GitHub PRs.




[jira] [Updated] (HADOOP-11823) Checking for Verifier in RPC Denied Reply

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11823:
-
Assignee: Pradeep Nayak Udupi Kadbet  (was: Brandon Li)




[jira] [Updated] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12345:
-
Assignee: Pradeep Nayak Udupi Kadbet




[jira] [Updated] (HADOOP-11823) Checking for Verifier in RPC Denied Reply

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11823:
-
Priority: Major  (was: Blocker)




[jira] [Updated] (HADOOP-11823) Checking for Verifier in RPC Denied Reply

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11823:
-
Target Version/s: 2.8.0
Priority: Blocker  (was: Major)




[jira] [Updated] (HADOOP-11823) Checking for Verifier in RPC Denied Reply

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11823:
-
Priority: Major  (was: Blocker)




[jira] [Commented] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

2016-06-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353974#comment-15353974
 ] 

Andrew Wang commented on HADOOP-12345:
--

Looked at the PR. Can we add a test for the length % 4 == 0 case? I think the 
current code is not quite right:

{code}
+int padding = 0;
+// we do not need compute padding if the hostname is already a multiple of 4
+if (mHostName.getBytes(Charsets.UTF_8).length != 0) {
+  padding = 4 - (mHostName.getBytes(Charsets.UTF_8).length % 4);
+}
{code}

I think you meant to check that len % 4 != 0. It'd be even better, though, to 
just {{% 4}} the padding one more time, which saves the if statement.

Also, we only need the comment about the padding once; the second one can be 
deleted.




[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo

2016-06-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353951#comment-15353951
 ] 

Andrew Wang commented on HADOOP-13184:
--

I have the same feedback as Shane about the feather. +1 for option 4.

> Add "Apache" to Hadoop project logo
> ---
>
> Key: HADOOP-13184
> URL: https://issues.apache.org/jira/browse/HADOOP-13184
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>Assignee: Abhishek
>
> Many ASF projects include "Apache" in their logo. We should add it to Hadoop.






[jira] [Updated] (HADOOP-13289) Remove unused variables in TestFairCallQueue

2016-06-28 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-13289:
-
Assignee: Ye Zhou

> Remove unused variables in TestFairCallQueue
> 
>
> Key: HADOOP-13289
> URL: https://issues.apache.org/jira/browse/HADOOP-13289
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Ye Zhou
>  Labels: newbie
>
> # Remove unused member {{alwaysZeroScheduler}} and related initialization in 
> {{TestFairCallQueue}}
> # Remove unused local variable {{sched}} in 
> {{testOfferSucceedsWhenScheduledLowPriority()}}
> And propagate to applicable release branches.






[jira] [Updated] (HADOOP-13289) Remove unused variables in TestFairCallQueue

2016-06-28 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-13289:
-
Issue Type: Improvement  (was: Bug)




[jira] [Commented] (HADOOP-13326) Broken link on libhdfs wiki docs

2016-06-28 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353910#comment-15353910
 ] 

Akira Ajisaka commented on HADOOP-13326:


Updated the wiki. Closing this issue.

> Broken link on libhdfs wiki docs
> 
>
> Key: HADOOP-13326
> URL: https://issues.apache.org/jira/browse/HADOOP-13326
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
> Environment: Windows 10, Google Chrome 
>Reporter: Jonathan Goldfarb
>Priority: Minor
>
> Not sure where best to report this (if not here, apologies for the noise), 
> but the "test cases" link at https://wiki.apache.org/hadoop/LibHDFS is 
> broken.






[jira] [Resolved] (HADOOP-13326) Broken link on libhdfs wiki docs

2016-06-28 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-13326.

Resolution: Done




[jira] [Updated] (HADOOP-13292) Erasure Code misfunctions when 3 DataNode down

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13292:
-
Affects Version/s: (was: HDFS-7285)
   3.0.0-alpha1

> Erasure Code misfunctions when 3 DataNode down
> --
>
> Key: HADOOP-13292
> URL: https://issues.apache.org/jira/browse/HADOOP-13292
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
> Environment: 9 DataNodes and 1 NameNode; the erasure coding policy is 
> set to "6-3". When 3 DataNodes go down, erasure-coded reads fail and an 
> exception is thrown.
>Reporter: gao shan
>
> The following are the steps to reproduce:
> 1) hadoop fs -mkdir /ec
> 2) set the erasure coding policy to "6-3"
> 3) "write" data by: 
> time hadoop jar 
> /opt/hadoop/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar
>   TestDFSIO -D test.build.data=/ec -write -nrFiles 30 -fileSize 12288 
> -bufferSize 1073741824
> 4) Manually take down 3 nodes: kill the "datanode" and "nodemanager" 
> threads on 3 DataNodes.
> 5) "read" the data back through erasure coding by:
> time hadoop jar 
> /opt/hadoop/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar
>   TestDFSIO -D test.build.data=/ec -read -nrFiles 30 -fileSize 12288 
> -bufferSize 1073741824
> then the failure occurs and the following exception is thrown:
> INFO mapreduce.Job: Task Id : attempt_1465445965249_0008_m_34_2, Status : 
> FAILED
> Error: java.io.IOException: 4 missing blocks, the stripe is: Offset=0, 
> length=8388608, fetchedChunksNum=0, missingChunksNum=4
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.checkMissingBlocks(DFSStripedInputStream.java:614)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readParityChunks(DFSStripedInputStream.java:647)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:762)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:316)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:450)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:941)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at org.apache.hadoop.fs.TestDFSIO$ReadMapper.doIO(TestDFSIO.java:531)
>   at org.apache.hadoop.fs.TestDFSIO$ReadMapper.doIO(TestDFSIO.java:508)
>   at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:134)
>   at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:37)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)






[jira] [Updated] (HADOOP-13290) Appropriate use of generics in FairCallQueue

2016-06-28 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-13290:
-
Issue Type: Improvement  (was: Bug)

> Appropriate use of generics in FairCallQueue
> 
>
> Key: HADOOP-13290
> URL: https://issues.apache.org/jira/browse/HADOOP-13290
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>  Labels: newbie++
>
> # {{BlockingQueue}} is inconsistently used with and without generic 
> parameters in the {{FairCallQueue}} class. It should be parameterized.
> # Same for {{FairCallQueue}} itself. It should be parameterized, though 
> that could be a bit more tricky.
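The cleanup being requested can be illustrated with a plain queue; this is a toy example with invented names, not the FairCallQueue code itself:

```java
// Raw vs. parameterized BlockingQueue: the raw type accepts any object and
// defers type errors to the point of use, while the parameterized form is
// checked at compile time. Toy example; not Hadoop code.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class GenericsSketch {
  @SuppressWarnings({"rawtypes", "unchecked"})
  public static void main(String[] args) throws InterruptedException {
    // Raw type, as in the code the issue describes: any element compiles.
    BlockingQueue raw = new LinkedBlockingQueue();
    raw.put("not a call"); // unchecked; would fail only where elements are used

    // Parameterized type: the element type is enforced by the compiler.
    BlockingQueue<Integer> typed = new LinkedBlockingQueue<>();
    typed.put(42);
    System.out.println(typed.take()); // 42
  }
}
```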






[jira] [Commented] (HADOOP-13326) Broken link on libhdfs wiki docs

2016-06-28 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353897#comment-15353897
 ] 

Akira Ajisaka commented on HADOOP-13326:


FYI: If you want to edit the Hadoop wiki, create a wiki account and mail 
common-...@hadoop.apache.org:

{noformat}
I want to edit Hadoop wiki. My wiki account id is "".
{noformat}




[jira] [Commented] (HADOOP-13326) Broken link on libhdfs wiki docs

2016-06-28 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353891#comment-15353891
 ] 

Akira Ajisaka commented on HADOOP-13326:


I'm thinking hdfs-...@hadoop.apache.org is the right place to report this, 
because the issue is not related to the Apache Hadoop source code.

I'll update the wiki to edit the link to 
https://git-wip-us.apache.org/repos/asf?p=hadoop.git;a=tree;f=hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests
 shortly.

Thanks [~jgoldfar] for reporting this.

> Broken link on libhdfs wiki docs
> 
>
> Key: HADOOP-13326
> URL: https://issues.apache.org/jira/browse/HADOOP-13326
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
> Environment: Windows 10, Google Chrome 
>Reporter: Jonathan Goldfarb
>Priority: Minor
>
> Not sure where best to report this (if not here, apologies for the noise), 
> but the "test cases" link here is broken: 
> https://wiki.apache.org/hadoop/LibHDFS






[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-06-28 Thread Federico Czerwinski (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353884#comment-15353884
 ] 

Federico Czerwinski commented on HADOOP-13075:
--

That patch is only a proof of concept and doesn't support SSE-C. I have this 
ticket almost done; just fixing a few tests.

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1]:
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of these, the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available, it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html
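As an illustration of what those additional properties might look like: the first property below exists today for the SSE-S3 opt-in from HADOOP-10568, while the SSE-KMS value and the key property are hypothetical placeholders whose final names would be decided in this ticket.

```xml
<!-- Existing SSE-S3 opt-in (HADOOP-10568) in core-site.xml -->
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>AES256</value>
</property>

<!-- Hypothetical sketch for SSE-KMS: algorithm value and key property
     names are placeholders, not committed configuration -->
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>SSE-KMS</value>
</property>
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:REGION:ACCOUNT:key/KEY-ID</value>
</property>
```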






[jira] [Resolved] (HADOOP-11862) Add support key replicas mechanism for KMS HA

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-11862.
--
Resolution: Not A Problem

Resolving, since a full HA story for the KMS also requires an HA backing key 
provider. Thanks for the nice responses, Arun!

> Add support key replicas mechanism for KMS HA
> -
>
> Key: HADOOP-11862
> URL: https://issues.apache.org/jira/browse/HADOOP-11862
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: dengxiumao
>  Labels: kms
>
> The patch [HADOOP-11620|https://issues.apache.org/jira/browse/HADOOP-11620] 
> only supports specification of multiple hostnames in the KMS key provider 
> URI. That is, it supports a configuration such as:
> {quote}
> <property>
>   <name>hadoop.security.key.provider.path</name>
>   <value>kms://http@[HOSTNAME1];[HOSTNAME2]:16000/kms</value>
> </property>
> {quote}
> but HA is still not available: if one of the KMS instances goes down, files 
> encrypted with keys held by that KMS can no longer be read.






[jira] [Resolved] (HADOOP-13303) Detail Informations of KMS High Avalibale

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-13303.
--
Resolution: Invalid

Please use the user list for questions like this; JIRA is for tracking product 
defects and code changes. Thanks!

> Detail Informations of KMS High Avalibale
> -
>
> Key: HADOOP-13303
> URL: https://issues.apache.org/jira/browse/HADOOP-13303
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha, kms
>Affects Versions: 2.7.2
>Reporter: qiushi fan
>
> I have some confusion about KMS HA recently.
> 1. We can set up multiple KMS instances behind a load balancer. Among all 
> these KMS instances there is only one master KMS; the others are slaves. The 
> master KMS handles key create/store/rollover/delete operations by directly 
> contacting the JCE keystore file, while a slave KMS handles those operations 
> by delegating them to the master KMS.
> So although we set up multiple KMS instances, there is only one JCE keystore 
> file, and only the master KMS can access it. Neither the JCE keystore file 
> nor the master KMS has a backup; if either of them dies, there is no way to 
> avoid losing data.
> Is all of the above true? Does KMS have no solution for handling the failure 
> of the master KMS or of the JCE keystore file?
> 2. I heard of another way to achieve KMS HA: making use of 
> LoadBalancingKMSClientProvider. But I can't find detailed information about 
> LoadBalancingKMSClientProvider. How does LoadBalancingKMSClientProvider 
> achieve KMS HA?






[jira] [Commented] (HADOOP-12864) Remove bin/rcc script

2016-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353750#comment-15353750
 ] 

Hudson commented on HADOOP-12864:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10029 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10029/])
HADOOP-12864. Remove bin/rcc script. Contributed by Allen Wittenauer. (wang: 
rev 77031a9c37e7e72f8825b9e22aa35b238e924576)
* hadoop-common-project/hadoop-common/src/main/bin/rcc


> Remove bin/rcc script
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.






[jira] [Updated] (HADOOP-12864) Remove bin/rcc script

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12864:
-
  Resolution: Fixed
   Fix Version/s: 3.0.0-alpha1
Target Version/s:   (was: )
  Status: Resolved  (was: Patch Available)

Committed to trunk, thanks Allen!

> Remove bin/rcc script
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.






[jira] [Updated] (HADOOP-12864) Remove bin/rcc script

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12864:
-
Release Note: The rcc command has been removed. See HADOOP-12485 where 
unused Hadoop Streaming classes were removed.  (was: The rcc command has been 
removed.)

> Remove bin/rcc script
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.






[jira] [Updated] (HADOOP-12864) Remove bin/rcc script

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12864:
-
Summary: Remove bin/rcc script  (was: bin/rcc doesn't work on trunk)

> Remove bin/rcc script
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.






[jira] [Commented] (HADOOP-12864) bin/rcc doesn't work on trunk

2016-06-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353740#comment-15353740
 ] 

Andrew Wang commented on HADOOP-12864:
--

The Rcc code disappeared in HADOOP-10485; looks like this was left behind. Nice 
find, Allen. LGTM, will commit shortly.

> bin/rcc doesn't work on trunk
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.






[jira] [Commented] (HADOOP-12160) Add snapshot APIs to the FileSystem specification

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353692#comment-15353692
 ] 

Hadoop QA commented on HADOOP-12160:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HADOOP-12160 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12749521/HADOOP-12160.003.patch
 |
| JIRA Issue | HADOOP-12160 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9894/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add snapshot APIs to the FileSystem specification
> -
>
> Key: HADOOP-12160
> URL: https://issues.apache.org/jira/browse/HADOOP-12160
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, test
>Affects Versions: 2.7.1
>Reporter: Arpit Agarwal
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12160.002.patch, HADOOP-12160.003.patch
>
>
> The following snapshot APIs should be documented in the [FileSystem 
> specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html].
> # createSnapshot(Path path)
> # createSnapshot(Path path, String snapshotName)
> # renameSnapshot(Path path, String snapshotOldName, String snapshotNewName)
> # deleteSnapshot(Path path, String snapshotName)
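To make the expected call shapes concrete while the specification is written, here is a toy in-memory stand-in (not Hadoop's FileSystem) that mirrors the four signatures above; paths and snapshot contents are plain strings for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

// In-memory mock mirroring the snapshot API shapes to be specified.
// Maps each path to its snapshots (snapshot name -> captured contents).
class SnapshotApiSketch {
    private final Map<String, Map<String, String>> snapshots = new HashMap<>();

    /** No-name overload: the filesystem is expected to generate a name. */
    public String createSnapshot(String path) {
        return createSnapshot(path, "s" + System.nanoTime());
    }

    public String createSnapshot(String path, String snapshotName) {
        snapshots.computeIfAbsent(path, p -> new HashMap<>())
                 .put(snapshotName, "contents-of-" + path);
        return snapshotName;
    }

    public void renameSnapshot(String path, String oldName, String newName) {
        Map<String, String> forPath = snapshots.get(path);
        forPath.put(newName, forPath.remove(oldName));
    }

    public void deleteSnapshot(String path, String snapshotName) {
        snapshots.get(path).remove(snapshotName);
    }

    public boolean hasSnapshot(String path, String name) {
        return snapshots.getOrDefault(path, new HashMap<>()).containsKey(name);
    }
}
```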






[jira] [Created] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification

2016-06-28 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13327:
---

 Summary: Add OutputStream + Syncable to the Filesystem 
Specification
 Key: HADOOP-13327
 URL: https://issues.apache.org/jira/browse/HADOOP-13327
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran


Write down what a Filesystem output stream should do. While the core API is 
defined in Java, that doesn't say what's expected about visibility, durability, 
etc., and the Hadoop Syncable interface is entirely ours to define.
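A minimal sketch of the visibility/durability distinction the specification would pin down, assuming the hflush()/hsync() pair of Hadoop's Syncable; the class below is an illustration over a plain local file, not Hadoop's actual stream implementation.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative stream with Syncable-style semantics over a local file:
// hflush() is about visibility to new readers, hsync() adds durability.
class SyncableLocalStream extends OutputStream {
    private final FileOutputStream out;

    SyncableLocalStream(String path) throws IOException {
        this.out = new FileOutputStream(path);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
    }

    /** Flush buffered data so new readers can see it (visibility). */
    public void hflush() throws IOException {
        out.flush();
    }

    /** Flush and force the data to stable storage (durability). */
    public void hsync() throws IOException {
        out.flush();
        out.getFD().sync();
    }

    @Override
    public void close() throws IOException {
        out.close();
    }
}
```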






[jira] [Updated] (HADOOP-13256) define FileSystem.listStatusIterator, implement contract tests

2016-06-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13256:

Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-12177

> define FileSystem.listStatusIterator, implement contract tests
> --
>
> Key: HADOOP-13256
> URL: https://issues.apache.org/jira/browse/HADOOP-13256
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>
> HADOOP-10987 added a new listing API to FS, but left out the specification 
> and contract tests. This JIRA covers the task of adding them.
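The incremental-listing contract can be illustrated with java.nio's DirectoryStream standing in for the remote listing; the RemoteIterator hasNext()/next() shape is assumed from HADOOP-10987, and this sketch is not the Hadoop API itself. The point is that entries are fetched as the caller advances, rather than materialized up front as listStatus does.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Iterator;

// Counts directory entries by advancing an iterator one entry at a time,
// mirroring the RemoteIterator<FileStatus> usage pattern being specified.
class ListStatusIteratorSketch {
    public static int countEntries(Path dir) throws IOException {
        int n = 0;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            Iterator<Path> it = stream.iterator();
            while (it.hasNext()) {   // mirrors RemoteIterator.hasNext()
                it.next();           // mirrors RemoteIterator.next()
                n++;
            }
        }
        return n;
    }
}
```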






[jira] [Commented] (HADOOP-12160) Add snapshot APIs to the FileSystem specification

2016-06-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353676#comment-15353676
 ] 

Steve Loughran commented on HADOOP-12160:
-

I've not looked at this, sorry. Can you sync it up with branch-2 and I will do 
my best.

> Add snapshot APIs to the FileSystem specification
> -
>
> Key: HADOOP-12160
> URL: https://issues.apache.org/jira/browse/HADOOP-12160
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, test
>Affects Versions: 2.7.1
>Reporter: Arpit Agarwal
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12160.002.patch, HADOOP-12160.003.patch
>
>
> The following snapshot APIs should be documented in the [FileSystem 
> specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html].
> # createSnapshot(Path path)
> # createSnapshot(Path path, String snapshotName)
> # renameSnapshot(Path path, String snapshotOldName, String snapshotNewName)
> # deleteSnapshot(Path path, String snapshotName)






[jira] [Created] (HADOOP-13326) Broken link on libhdfs wiki docs

2016-06-28 Thread Jonathan Goldfarb (JIRA)
Jonathan Goldfarb created HADOOP-13326:
--

 Summary: Broken link on libhdfs wiki docs
 Key: HADOOP-13326
 URL: https://issues.apache.org/jira/browse/HADOOP-13326
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
 Environment: Windows 10, Google Chrome 
Reporter: Jonathan Goldfarb
Priority: Minor


Not sure where best to report this (if not here, apologies for the noise), but 
the "test cases" link here is broken: https://wiki.apache.org/hadoop/LibHDFS






[jira] [Updated] (HADOOP-11134) Change the default log level of interactive commands from INFO to WARN

2016-06-28 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11134:

Target Version/s:   (was: )
Hadoop Flags: Incompatible change

> Change the default log level of interactive commands from INFO to WARN
> --
>
> Key: HADOOP-11134
> URL: https://issues.apache.org/jira/browse/HADOOP-11134
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Akira Ajisaka
>Assignee: Christopher Buckley
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-11134.001.patch, HADOOP-11134.02.patch
>
>
> Split from HADOOP-7984. Currently the bin/hadoop script outputs INFO 
> messages. How about making it output only WARN and ERROR messages?






[jira] [Commented] (HADOOP-9769) Remove org.apache.hadoop.fs.Stat when JDK6 support is dropped

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353132#comment-15353132
 ] 

Hadoop QA commented on HADOOP-9769:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 6 new + 4 unchanged - 11 fixed = 10 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 44s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestSymlinkLocalFSFileContext |
|   | hadoop.fs.TestSymlinkLocalFSFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12733840/HADOOP-9769.001.patch 
|
| JIRA Issue | HADOOP-9769 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4ae58e34f48b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 23c3ff8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9893/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9893/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9893/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9893/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove org.apache.hadoop.fs.Stat when JDK6 support is dropped
> 

[jira] [Commented] (HADOOP-9330) Add custom JUnit4 test runner with configurable timeout

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353124#comment-15353124
 ] 

Hadoop QA commented on HADOOP-9330:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 33s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestGroupsCaching |
|   | hadoop.ha.TestZKFailoverController |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12573149/HADOOP-9330-timeouts-1.patch
 |
| JIRA Issue | HADOOP-9330 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cf96a726ac58 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 23c3ff8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9887/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9887/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9887/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9887/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add custom JUnit4 test runner with 

[jira] [Commented] (HADOOP-9769) Remove org.apache.hadoop.fs.Stat when JDK6 support is dropped

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353113#comment-15353113
 ] 

Hadoop QA commented on HADOOP-9769:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 6 new + 4 unchanged - 11 fixed = 10 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  9s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestSymlinkLocalFSFileContext |
|   | hadoop.fs.TestSymlinkLocalFSFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12733840/HADOOP-9769.001.patch 
|
| JIRA Issue | HADOOP-9769 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c6a233e8297f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 23c3ff8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9889/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9889/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9889/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9889/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove org.apache.hadoop.fs.Stat when JDK6 support is dropped
> -------------------------------------------------------------

[jira] [Commented] (HADOOP-13034) Log message about input options in distcp lacks some items

2016-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15353100#comment-15353100
 ] 

Hudson commented on HADOOP-13034:
---------------------------------

SUCCESS: Integrated in Hadoop-trunk-Commit #10027 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10027/])
HADOOP-13034. Log message about input options in distcp lacks some items (aw: 
rev 422c73a8657d8699920f7db13d4be200e16c4272)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java


> Log message about input options in distcp lacks some items
> ----------------------------------------------------------
>
> Key: HADOOP-13034
> URL: https://issues.apache.org/jira/browse/HADOOP-13034
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13034.1.patch
>
>
> The log message printed when running distcp does not show some options, e.g.
> append, useDiff, and the snapshot names.
> {code}
> 16/04/18 21:57:36 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100.0, 
> copyStrategy='uniformsize', preserveStatus=[], preserveRawXattrs=false, 
> atomicWorkPath=null, logPath=null, sourceFileListing=null, 
> sourcePaths=[/user/hadoop/source], targetPath=/user/hadoop/target, 
> targetPathExists=true, filtersFile='null'}
> {code}
> This message is useful for debugging, so it would be better to include the
> missing options.
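For illustration, including the missing fields in the options' string representation could look roughly like the sketch below. This is a hypothetical standalone class, not the actual HADOOP-13034 patch; the field names (append, useDiff, fromSnapshot, toSnapshot) are assumptions based on the issue description.

```java
// Hypothetical sketch: a DistCpOptions-style toString() that also
// reports the fields the issue says are missing from the log line.
// Not the real org.apache.hadoop.tools.DistCpOptions.
public class DistCpOptionsSketch {
    private final boolean append = false;      // assumed field
    private final boolean useDiff = false;     // assumed field
    private final String fromSnapshot = null;  // assumed field
    private final String toSnapshot = null;    // assumed field

    @Override
    public String toString() {
        // Mirrors the brace-delimited key=value style shown in the log.
        return "DistCpOptions{" +
                "append=" + append +
                ", useDiff=" + useDiff +
                ", fromSnapshot='" + fromSnapshot + '\'' +
                ", toSnapshot='" + toSnapshot + '\'' +
                '}';
    }

    public static void main(String[] args) {
        System.out.println(new DistCpOptionsSketch());
        // prints: DistCpOptions{append=false, useDiff=false, fromSnapshot='null', toSnapshot='null'}
    }
}
```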



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13034) Log message about input options in distcp lacks some items

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15353101#comment-15353101
 ] 

Hadoop QA commented on HADOOP-13034:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
32s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12799247/HADOOP-13034.1.patch |
| JIRA Issue | HADOOP-13034 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0710bfa05f9b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 23c3ff8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9892/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9892/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Log message about input options in distcp lacks some items
> --
>
> Key: HADOOP-13034
> URL: https://issues.apache.org/jira/browse/HADOOP-13034
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13034.1.patch
>
>
> The log message in running distcp does not show some options, i.e. 

[jira] [Commented] (HADOOP-9888) KerberosName static initialization gets default realm, which is unneeded in non-secure deployment.

2016-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15353099#comment-15353099
 ] 

Hudson commented on HADOOP-9888:


SUCCESS: Integrated in Hadoop-trunk-Commit #10027 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10027/])
HADOOP-9888. KerberosName static initialization gets default realm, (aw: rev 
be38e530bb23b134758e29c9101f98cf4e1d2c38)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java


> KerberosName static initialization gets default realm, which is unneeded in 
> non-secure deployment.
> ----------------------------------------------------------------------------
>
> Key: HADOOP-9888
> URL: https://issues.apache.org/jira/browse/HADOOP-9888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.1.1-beta, 3.0.0-alpha1
>Reporter: Chris Nauroth
>Assignee: Dmytro Kabakchei
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-9888.001.patch
>
>
> {{KerberosName}} has a static initialization block that looks up the default 
> realm.  Running with Oracle JDK7, this code path triggers a DNS query.  In 
> some environments, we've seen this DNS query block and time out after 30 
> seconds.  This is part of static initialization, and the class is referenced 
> from {{UserGroupInformation#initialize}}, so every daemon and every shell 
> command experiences this delay.  This occurs even for non-secure deployments, 
> which don't need the default realm.
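A common remedy for such a problem is to defer the expensive lookup from the static initializer to first use, so non-secure deployments never pay the cost. Below is a minimal, hypothetical sketch of that pattern (double-checked locking with a volatile field); the realm lookup itself is stubbed, whereas the real KerberosName consults the JDK Kerberos configuration and is not shown here.

```java
// Hypothetical sketch: lazy, thread-safe initialization of a default
// realm, so the (potentially slow, DNS-backed) lookup runs only when
// something actually asks for the realm. Stubbed; not the real
// org.apache.hadoop.security.authentication.util.KerberosName.
public class LazyRealm {
    // volatile so the double-checked locking below is safe (JMM).
    private static volatile String defaultRealm;

    private static String lookupDefaultRealm() {
        // Real code would consult krb5 config / DNS; stubbed here.
        return "EXAMPLE.COM";
    }

    public static String getDefaultRealm() {
        String realm = defaultRealm;
        if (realm == null) {                    // first check, no lock
            synchronized (LazyRealm.class) {
                realm = defaultRealm;
                if (realm == null) {            // second check, under lock
                    realm = lookupDefaultRealm();
                    defaultRealm = realm;
                }
            }
        }
        return realm;
    }

    public static void main(String[] args) {
        System.out.println(LazyRealm.getDefaultRealm());
        // prints: EXAMPLE.COM
    }
}
```

With this shape, merely loading the class (as UserGroupInformation#initialize does) triggers no lookup at all.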






[jira] [Commented] (HADOOP-9321) fix coverage org.apache.hadoop.net

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15353088#comment-15353088
 ] 

Hadoop QA commented on HADOOP-9321:
-----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 5 new + 3 unchanged - 1 fixed = 8 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
59s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12610811/HADOOP-9321-trunk-d.patch
 |
| JIRA Issue | HADOOP-9321 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ebcc3c693083 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 23c3ff8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9888/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9888/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9888/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9888/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> fix coverage  org.apache.hadoop.net
> -----------------------------------
>
> Key: HADOOP-9321
> URL: https://issues.apache.org/jira/browse/HADOOP-9321
> Project: Hadoop 

[jira] [Updated] (HADOOP-9888) KerberosName static initialization gets default realm, which is unneeded in non-secure deployment.

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9888:
-------------------------------------
  Resolution: Fixed
   Fix Version/s: 3.0.0-alpha1
Target Version/s:   (was: )
  Status: Resolved  (was: Patch Available)

+1 committed to trunk.

Thanks!

> KerberosName static initialization gets default realm, which is unneeded in 
> non-secure deployment.
> ----------------------------------------------------------------------------
>
> Key: HADOOP-9888
> URL: https://issues.apache.org/jira/browse/HADOOP-9888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.1.1-beta, 3.0.0-alpha1
>Reporter: Chris Nauroth
>Assignee: Dmytro Kabakchei
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-9888.001.patch
>
>
> {{KerberosName}} has a static initialization block that looks up the default 
> realm.  Running with Oracle JDK7, this code path triggers a DNS query.  In 
> some environments, we've seen this DNS query block and time out after 30 
> seconds.  This is part of static initialization, and the class is referenced 
> from {{UserGroupInformation#initialize}}, so every daemon and every shell 
> command experiences this delay.  This occurs even for non-secure deployments, 
> which don't need the default realm.






[jira] [Updated] (HADOOP-13034) Log message about input options in distcp lacks some items

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13034:
--------------------------------------
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

+1 committed to trunk.

Thanks!

> Log message about input options in distcp lacks some items
> ----------------------------------------------------------
>
> Key: HADOOP-13034
> URL: https://issues.apache.org/jira/browse/HADOOP-13034
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13034.1.patch
>
>
> The log message printed when running distcp does not show some options, e.g.
> append, useDiff, and the snapshot names.
> {code}
> 16/04/18 21:57:36 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100.0, 
> copyStrategy='uniformsize', preserveStatus=[], preserveRawXattrs=false, 
> atomicWorkPath=null, logPath=null, sourceFileListing=null, 
> sourcePaths=[/user/hadoop/source], targetPath=/user/hadoop/target, 
> targetPathExists=true, filtersFile='null'}
> {code}
> This message is useful for debugging, so it would be better to include the
> missing options.






[jira] [Commented] (HADOOP-9888) KerberosName static initialization gets default realm, which is unneeded in non-secure deployment.

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15353055#comment-15353055
 ] 

Hadoop QA commented on HADOOP-9888:
-----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-common-project/hadoop-auth: The patch 
generated 0 new + 11 unchanged - 2 fixed = 11 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
24s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780572/HADOOP-9888.001.patch 
|
| JIRA Issue | HADOOP-9888 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3ec6cd2d1c85 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 23c3ff8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9885/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9885/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KerberosName static initialization gets default realm, which is unneeded in 
> non-secure deployment.
> ----------------------------------------------------------------------------
>
> Key: HADOOP-9888
> URL: https://issues.apache.org/jira/browse/HADOOP-9888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
> 

[jira] [Commented] (HADOOP-13034) Log message about input options in distcp lacks some items

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15353052#comment-15353052
 ] 

Hadoop QA commented on HADOOP-13034:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
15s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12799247/HADOOP-13034.1.patch |
| JIRA Issue | HADOOP-13034 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0c5a2574f2fc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 23c3ff8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9886/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9886/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Log message about input options in distcp lacks some items
> ----------------------------------------------------------
>
> Key: HADOOP-13034
> URL: https://issues.apache.org/jira/browse/HADOOP-13034
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
>Priority: Minor
> Attachments: HADOOP-13034.1.patch
>
>
> The log message in running distcp does not show some options, i.e. append, 
> useDiff and snapshot 

[jira] [Updated] (HADOOP-9769) Remove org.apache.hadoop.fs.Stat when JDK6 support is dropped

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9769:
-------------------------------------
Target Version/s: 3.0.0-alpha1
Priority: Major  (was: Minor)

> Remove org.apache.hadoop.fs.Stat when JDK6 support is dropped
> -------------------------------------------------------------
>
> Key: HADOOP-9769
> URL: https://issues.apache.org/jira/browse/HADOOP-9769
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Kengo Seki
> Attachments: HADOOP-9769.001.patch
>
>
> HADOOP-9652 introduces a new class which shells out to stat(1) because of the 
> lack of lstat(2) in Java 6. Java 7 has support for reading symlink targets 
> via {{Files#readSymbolicLink}}.
> When Hadoop drops Java 6 support, let's use this more portable method instead.
> See:
> http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#readSymbolicLink(java.nio.file.Path)
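The Java 7 replacement the issue points at can be sketched in a few lines: reading a symlink target with Files#readSymbolicLink instead of forking stat(1). This demo creates its own temp directory and link; it is an illustration of the JDK API, not the HADOOP-9769 patch itself.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal demo of java.nio.file.Files#readSymbolicLink, the portable
// alternative to shelling out to stat(1) for symlink targets.
public class ReadLinkDemo {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("readlink-demo");
        Path target = Files.createFile(dir.resolve("target.txt"));
        Path link = dir.resolve("link");
        Files.createSymbolicLink(link, target);
        // No fork/exec of an external stat binary; this maps to a
        // readlink(2)-style call and returns the stored target path.
        System.out.println(Files.readSymbolicLink(link));
    }
}
```

Note that createSymbolicLink may require elevated privileges on some platforms (e.g. Windows), which is part of why a portable JDK API beats an external binary here.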






[jira] [Commented] (HADOOP-12168) Clean undeclared used dependencies

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15353038#comment-15353038
 ] 

Hadoop QA commented on HADOOP-12168:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HADOOP-12168 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12753402/HADOOP-12168.6.patch |
| JIRA Issue | HADOOP-12168 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9890/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean undeclared used dependencies
> ----------------------------------
>
> Key: HADOOP-12168
> URL: https://issues.apache.org/jira/browse/HADOOP-12168
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
> Attachments: HADOOP-12168.1.patch, HADOOP-12168.2.patch, 
> HADOOP-12168.3.patch, HADOOP-12168.4.patch, HADOOP-12168.5.patch, 
> HADOOP-12168.6.patch
>
>







[jira] [Commented] (HADOOP-9769) Remove org.apache.hadoop.fs.Stat when JDK6 support is dropped

2016-06-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353039#comment-15353039
 ] 

Allen Wittenauer commented on HADOOP-9769:
--

Updating this to major because portability is a pain in the ass.

> Remove org.apache.hadoop.fs.Stat when JDK6 support is dropped
> -
>
> Key: HADOOP-9769
> URL: https://issues.apache.org/jira/browse/HADOOP-9769
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Kengo Seki
> Attachments: HADOOP-9769.001.patch
>
>
> HADOOP-9652 introduces a new class which shells out to stat(1) because of the 
> lack of lstat(2) in Java 6. Java 7 has support for reading symlink targets 
> via {{Files#readSymbolicLink}}.
> When Hadoop drops Java 6 support, let's use this more portable method instead.
> See:
> http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#readSymbolicLink(java.nio.file.Path)
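The {{Files#readSymbolicLink}} API referenced above can be exercised directly; here is a minimal, self-contained sketch (the file paths and class name are illustrative, not from the Hadoop tree):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReadLinkDemo {
    public static void main(String[] args) throws IOException {
        Path target = Paths.get("/tmp/readlink-demo-target.txt");
        Path link = Paths.get("/tmp/readlink-demo-link");
        Files.deleteIfExists(link);
        Files.deleteIfExists(target);
        Files.createFile(target);
        Files.createSymbolicLink(link, target);

        // Reads the raw link target in pure Java, replacing the stat(1)
        // shell-out; unlike toRealPath(), it does not resolve link chains.
        Path resolved = Files.readSymbolicLink(link);
        System.out.println(resolved); // prints /tmp/readlink-demo-target.txt
    }
}
```

Note that {{Files#createSymbolicLink}} can throw UnsupportedOperationException on platforms without symlink support, which is the same portability concern that motivated the shell-out in HADOOP-9652.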






[jira] [Commented] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353034#comment-15353034
 ] 

Hadoop QA commented on HADOOP-13252:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
31s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
48s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 23s{color} | {color:orange} root: The patch generated 2 new + 8 unchanged - 
0 fixed = 10 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
35s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Commented] (HADOOP-12864) bin/rcc doesn't work on trunk

2016-06-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353018#comment-15353018
 ] 

Allen Wittenauer commented on HADOOP-12864:
---

Ping [~andrew.wang].  We need this in for 3.x. Thanks.

> bin/rcc doesn't work on trunk
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.






[jira] [Commented] (HADOOP-12168) Clean undeclared used dependencies

2016-06-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353035#comment-15353035
 ] 

Allen Wittenauer commented on HADOOP-12168:
---

bq. 485m 4s 

This is bumping up against the Jenkins limit.  It should really be broken up 
into multiple patches.

> Clean undeclared used dependencies
> --
>
> Key: HADOOP-12168
> URL: https://issues.apache.org/jira/browse/HADOOP-12168
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
> Attachments: HADOOP-12168.1.patch, HADOOP-12168.2.patch, 
> HADOOP-12168.3.patch, HADOOP-12168.4.patch, HADOOP-12168.5.patch, 
> HADOOP-12168.6.patch
>
>







[jira] [Updated] (HADOOP-12486) Mockito missing in pom.xml of hadoop-kafka

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12486:
--
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Mockito missing in pom.xml of hadoop-kafka
> --
>
> Key: HADOOP-12486
> URL: https://issues.apache.org/jira/browse/HADOOP-12486
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Chengbing Liu
>Assignee: Chengbing Liu
> Attachments: HADOOP-12486.01.patch
>
>
> Eclipse will generate build errors without the following:
> {code}
> <dependency>
>   <groupId>org.mockito</groupId>
>   <artifactId>mockito-all</artifactId>
>   <scope>test</scope>
> </dependency>
> {code}






[jira] [Commented] (HADOOP-11134) Change the default log level of interactive commands from INFO to WARN

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353029#comment-15353029
 ] 

Hadoop QA commented on HADOOP-11134:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
43s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
8s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
0s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814119/HADOOP-11134.02.patch 
|
| JIRA Issue | HADOOP-11134 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux f2a64f919eec 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 23c3ff8 |
| shellcheck | v0.4.4 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9884/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9884/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Change the default log level of interactive commands from INFO to WARN
> --
>
> Key: HADOOP-11134
> URL: https://issues.apache.org/jira/browse/HADOOP-11134
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Akira Ajisaka
>Assignee: Christopher Buckley
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-11134.001.patch, HADOOP-11134.02.patch
>
>
> Split from HADOOP-7984. Currently the bin/hadoop script outputs INFO messages. 
> How about making it output only WARN and ERROR messages?






[jira] [Updated] (HADOOP-9321) fix coverage org.apache.hadoop.net

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9321:
-
Labels:   (was: BB2015-05-TBR)

> fix coverage  org.apache.hadoop.net
> ---
>
> Key: HADOOP-9321
> URL: https://issues.apache.org/jira/browse/HADOOP-9321
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 2.3.0, 3.0.0-alpha1
>Reporter: Aleksey Gorshkov
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9321-trunk-a.patch, HADOOP-9321-trunk-b.patch, 
> HADOOP-9321-trunk-d.patch, HADOOP-9321-trunk.patch
>
>
> fix coverage  org.apache.hadoop.net
> HADOOP-9321-trunk.patch patch for trunk, branch-2, branch-0.23






[jira] [Updated] (HADOOP-10197) Disable additional m2eclipse plugin execution

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10197:
--
Labels:   (was: BB2015-05-TBR)

> Disable additional m2eclipse plugin execution
> -
>
> Key: HADOOP-10197
> URL: https://issues.apache.org/jira/browse/HADOOP-10197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Eric Charles
> Attachments: HADOOP-10197-2.patch, HADOOP-10197.patch
>
>
> M2Eclipse complains when importing the Maven modules into Eclipse.
> We should add more filters to the org.eclipse.m2e.lifecycle-mapping plugin.






[jira] [Commented] (HADOOP-12588) Fix intermittent test failure of TestGangliaMetrics

2016-06-28 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353002#comment-15353002
 ] 

Masatake Iwasaki commented on HADOOP-12588:
---

Thanks for reporting this, [~ste...@apache.org]. The test log of HADOOP-13323 
indicates there is a race with {{TestMetricsSystemImpl}}: 
{{TestGangliaMetrics#testGangliaMetrics2}} sets {{*.period}} to 120, but 8 was 
used.

{noformat}
2016-06-27 15:21:31,480 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 8 
second(s).
{noformat}

I will upload an additional patch, or open another issue if the test needs 
major refactoring.
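For reference, {{*.period}} is the standard metrics2 snapshot interval, normally configured in hadoop-metrics2.properties (the test builds an equivalent config programmatically); a minimal sketch, with the sink prefix chosen for illustration:

```properties
# Snapshot period in seconds, applied to every prefix ("*").
# testGangliaMetrics2 expects 120; the log above shows a stale 8s
# period from TestMetricsSystemImpl being picked up instead.
*.period=120

# Illustrative Ganglia sink registration under a "test" prefix.
test.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
```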


> Fix intermittent test failure of TestGangliaMetrics
> ---
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12588.001.patch, HADOOP-12588.addendum.02.patch, 
> HADOOP-12588.addendum.03.patch, HADOOP-12588.addendum.patch
>
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
> FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
> testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
> elapsed: 0.39 sec  <<< FAILURE!
> java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
> {quote}






[jira] [Updated] (HADOOP-11134) Change the default log level of interactive commands from INFO to WARN

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11134:
--
Attachment: HADOOP-11134.02.patch

-02:
* change interactives to warn

> Change the default log level of interactive commands from INFO to WARN
> --
>
> Key: HADOOP-11134
> URL: https://issues.apache.org/jira/browse/HADOOP-11134
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Akira Ajisaka
>Assignee: Christopher Buckley
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-11134.001.patch, HADOOP-11134.02.patch
>
>
> Split from HADOOP-7984. Currently the bin/hadoop script outputs INFO messages. 
> How about making it output only WARN and ERROR messages?






[jira] [Updated] (HADOOP-11134) Change the default log level of interactive commands from INFO to WARN

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11134:
--
Summary: Change the default log level of interactive commands from INFO to 
WARN  (was: Change the default log level of bin/hadoop from INFO to WARN)

> Change the default log level of interactive commands from INFO to WARN
> --
>
> Key: HADOOP-11134
> URL: https://issues.apache.org/jira/browse/HADOOP-11134
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Akira Ajisaka
>Assignee: Christopher Buckley
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-11134.001.patch
>
>
> Split from HADOOP-7984. Currently the bin/hadoop script outputs INFO messages. 
> How about making it output only WARN and ERROR messages?






[jira] [Commented] (HADOOP-13209) replace slaves with workers

2016-06-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352967#comment-15352967
 ] 

Allen Wittenauer commented on HADOOP-13209:
---

I know I, personally, have found it difficult to talk about Hadoop having 
slaves, especially with a more diverse audience. Words do have a certain power. 
FWIW, lots of other projects are also changing their vocabulary for similar 
reasons.

> replace slaves with workers
> ---
>
> Key: HADOOP-13209
> URL: https://issues.apache.org/jira/browse/HADOOP-13209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: John Smith
>Assignee: John Smith
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13209.v01.patch, HADOOP-13209.v02.patch
>
>
> slaves.sh and the slaves file should get replaced with workers.sh and a 
> workers file.






[jira] [Commented] (HADOOP-13209) replace slaves with workers

2016-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352962#comment-15352962
 ] 

Hudson commented on HADOOP-13209:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10026 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10026/])
HADOOP-13209. replace slaves with workers (John Smith via aw) (aw: rev 
23c3ff85a9e73d8f0755e14f12cc7c89b72acddd)
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.cmd
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestMapReduceLazyOutput.java
* hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipes.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
* hadoop-common-project/hadoop-common/src/test/scripts/hadoop_workers.bats
* hadoop-common-project/hadoop-common/src/main/conf/workers
* hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
* hadoop-yarn-project/hadoop-yarn/bin/yarn-config.cmd
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* 
hadoop-common-project/hadoop-common/src/main/conf/hadoop-user-functions.sh.example
* hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.sh
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestBinaryTokenFile.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestMRCredentials.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/Federation.md
* hadoop-yarn-project/hadoop-yarn/conf/slaves
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemons.sh
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestLazyOutput.java
* hadoop-yarn-project/hadoop-yarn/bin/yarn-daemons.sh
* hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
* hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh
* hadoop-common-project/hadoop-common/src/main/bin/slaves.sh
* hadoop-common-project/hadoop-common/src/test/scripts/hadoop_ssh.bats
* hadoop-common-project/hadoop-common/src/main/java/overview.html
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/filecache/DistributedCache.java
* hadoop-common-project/hadoop-common/src/main/bin/workers.sh
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/overview.html
* hadoop-common-project/hadoop-common/src/main/bin/hadoop
* hadoop-yarn-project/hadoop-yarn/pom.xml
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-secure-dns.sh
* hadoop-yarn-project/hadoop-yarn/bin/yarn
* hadoop-common-project/hadoop-common/src/test/scripts/hadoop_slaves.bats
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-secure-dns.sh
* hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/ReliabilityTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md
* hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-dfs.sh


> replace slaves with workers
> ---
>
> Key: HADOOP-13209
> URL: https://issues.apache.org/jira/browse/HADOOP-13209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: John Smith
>Assignee: John Smith
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13209.v01.patch, HADOOP-13209.v02.patch
>
>
> slaves.sh and the slaves file should get replaced with workers.sh and a 
> workers file.






[jira] [Commented] (HADOOP-11134) Change the default log level of bin/hadoop from INFO to WARN

2016-06-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352963#comment-15352963
 ] 

Allen Wittenauer commented on HADOOP-11134:
---

We should probably revisit this before the 3.x alpha is cut.  Ping 
[~andrew.wang]. I'll update the patch since it doesn't work for 3.x anymore.

> Change the default log level of bin/hadoop from INFO to WARN
> 
>
> Key: HADOOP-11134
> URL: https://issues.apache.org/jira/browse/HADOOP-11134
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Akira Ajisaka
>Assignee: Christopher Buckley
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-11134.001.patch
>
>
> Split from HADOOP-7984. Currently the bin/hadoop script outputs INFO messages. 
> How about making it output only WARN and ERROR messages?






[jira] [Commented] (HADOOP-13235) Use Date and Time API in KafkaSink

2016-06-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352955#comment-15352955
 ] 

Allen Wittenauer commented on HADOOP-13235:
---

+1

> Use Date and Time API in KafkaSink
> --
>
> Key: HADOOP-13235
> URL: https://issues.apache.org/jira/browse/HADOOP-13235
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: jdk8
> Attachments: HADOOP-13235.01.patch
>
>
> We can use Date and Time API (JSR-310) in trunk code.
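As a hedged sketch of what JSR-310 buys here (illustrative, not the actual KafkaSink change): {{DateTimeFormatter}} is immutable and thread-safe, unlike java.text.SimpleDateFormat, so a sink can share a single instance across threads:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class Jsr310Demo {
    // Immutable and thread-safe, so safe to keep as a shared constant;
    // SimpleDateFormat must not be shared across threads this way.
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
                             .withZone(ZoneId.of("UTC"));

    public static void main(String[] args) {
        String ts = FMT.format(Instant.ofEpochMilli(0L));
        System.out.println(ts); // prints 1970-01-01 00:00:00
    }
}
```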






[jira] [Updated] (HADOOP-13209) replace slaves with workers

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13209:
--
Release Note: The 'slaves' file has been deprecated in favor of the 
'workers' file and, other than the deprecation warnings, all references to 
slavery have been removed from the source tree.

> replace slaves with workers
> ---
>
> Key: HADOOP-13209
> URL: https://issues.apache.org/jira/browse/HADOOP-13209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: John Smith
>Assignee: John Smith
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13209.v01.patch, HADOOP-13209.v02.patch
>
>
> slaves.sh and the slaves file should get replaced with workers.sh and a 
> workers file.






[jira] [Commented] (HADOOP-13209) replace slaves with workers

2016-06-28 Thread Lars Francke (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352945#comment-15352945
 ] 

Lars Francke commented on HADOOP-13209:
---

It's not that I'm opposed to this change, but it was made without giving any 
reason. Can you provide some details on why this was changed?

> replace slaves with workers
> ---
>
> Key: HADOOP-13209
> URL: https://issues.apache.org/jira/browse/HADOOP-13209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: John Smith
>Assignee: John Smith
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13209.v01.patch, HADOOP-13209.v02.patch
>
>
> slaves.sh and the slaves file should get replaced with workers.sh and a 
> workers file.






[jira] [Comment Edited] (HADOOP-13209) replace slaves with workers

2016-06-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352939#comment-15352939
 ] 

Allen Wittenauer edited comment on HADOOP-13209 at 6/28/16 12:56 PM:
-

+1 LGTM

Thanks! Committed to trunk.


was (Author: aw):
+1 LGTM

> replace slaves with workers
> ---
>
> Key: HADOOP-13209
> URL: https://issues.apache.org/jira/browse/HADOOP-13209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: John Smith
>Assignee: John Smith
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13209.v01.patch, HADOOP-13209.v02.patch
>
>
> slaves.sh and the slaves file should get replaced with workers.sh and a 
> workers file.






[jira] [Updated] (HADOOP-13209) replace slaves with workers

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13209:
--
   Resolution: Fixed
 Hadoop Flags: Incompatible change
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

+1 LGTM

> replace slaves with workers
> ---
>
> Key: HADOOP-13209
> URL: https://issues.apache.org/jira/browse/HADOOP-13209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: John Smith
>Assignee: John Smith
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13209.v01.patch, HADOOP-13209.v02.patch
>
>
> slaves.sh and the slaves file should get replaced with workers.sh and a 
> workers file.






[jira] [Updated] (HADOOP-13209) replace slaves with workers

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13209:
--
Assignee: John Smith

> replace slaves with workers
> ---
>
> Key: HADOOP-13209
> URL: https://issues.apache.org/jira/browse/HADOOP-13209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: John Smith
>Assignee: John Smith
> Attachments: HADOOP-13209.v01.patch, HADOOP-13209.v02.patch
>
>
> slaves.sh and the slaves file should get replaced with workers.sh and a 
> workers file.






[jira] [Commented] (HADOOP-13251) Authenticate with Kerberos credentials when renewing KMS delegation token

2016-06-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352933#comment-15352933
 ] 

Allen Wittenauer commented on HADOOP-13251:
---

What happens when an AltKerberos implementation is used?  Has that been tested?

> Authenticate with Kerberos credentials when renewing KMS delegation token
> -
>
> Key: HADOOP-13251
> URL: https://issues.apache.org/jira/browse/HADOOP-13251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0
>
> Attachments: HADOOP-13251.01.patch, HADOOP-13251.02.patch, 
> HADOOP-13251.03.patch, HADOOP-13251.04.patch, HADOOP-13251.05.patch, 
> HADOOP-13251.06.patch, HADOOP-13251.07.patch, HADOOP-13251.08.patch, 
> HADOOP-13251.08.patch, HADOOP-13251.09.patch, HADOOP-13251.10.patch, 
> HADOOP-13251.innocent.patch
>
>
> It turns out the KMS delegation token renewal feature (HADOOP-13155) does not 
> work well with client-side impersonation.
> In an MR example, an end user (UGI:user) gets all kinds of DTs (with 
> renewer=yarn) and passes them to Yarn. Yarn's resource manager (UGI:yarn) then 
> renews these DTs as long as the MR jobs are running. But currently, the token 
> is used on the KMS server side to decide the renewer, which in this case is 
> always the token's owner. This ends up rejecting the renew request due to a 
> renewer mismatch.






[jira] [Commented] (HADOOP-13324) s3a doesn't authenticate with S3 frankfurt (or other V4 auth only endpoints)

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352924#comment-15352924
 ] 

Hadoop QA commented on HADOOP-13324:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 39 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814109/HADOOP-13324-branch-2-001.patch
 |
| JIRA Issue | HADOOP-13324 |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  findbugs  checkstyle  |
| uname | Linux a13eda3df6f2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 1e34763 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| findbugs | v3.0.0 |
| whitespace | 

[jira] [Updated] (HADOOP-13324) s3a doesn't authenticate with S3 frankfurt (or other V4 auth only endpoints)

2016-06-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13324:

Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

> s3a doesn't authenticate with S3 frankfurt (or other V4 auth only endpoints)
> 
>
> Key: HADOOP-13324
> URL: https://issues.apache.org/jira/browse/HADOOP-13324
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13324-branch-2-001.patch, 
> HADOOP-13324-branch-2-001.patch
>
>
> S3A doesn't auth with S3 Frankfurt; this installation only supports the V4 API.
> There are some JVM options that should set this, but even they don't appear 
> to be enough. It appears that we have to allow the s3a client to change the 
> endpoint with which it authenticates from the generic "AWS S3" one to a 
> Frankfurt-specific one.
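The endpoint switch described above can be sketched as a core-site.xml fragment. The `fs.s3a.endpoint` property exists in s3a; the Frankfurt hostname and the expectation that pointing at a region-specific host makes the SDK negotiate V4 signing are assumptions drawn from this description, not from the attached patch:

```xml
<!-- Sketch: point s3a at the Frankfurt (eu-central-1) endpoint so the AWS SDK
     authenticates against a region-specific host rather than the generic
     "AWS S3" one. Hostname assumed from AWS region naming conventions. -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.eu-central-1.amazonaws.com</value>
</property>
```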






[jira] [Updated] (HADOOP-13324) s3a doesn't authenticate with S3 frankfurt (or other V4 auth only endpoints)

2016-06-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13324:

Attachment: HADOOP-13324-branch-2-001.patch

Patch 002: cut out tabs from index.md.

> s3a doesn't authenticate with S3 frankfurt (or other V4 auth only endpoints)
> 
>
> Key: HADOOP-13324
> URL: https://issues.apache.org/jira/browse/HADOOP-13324
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13324-branch-2-001.patch, 
> HADOOP-13324-branch-2-001.patch
>
>
> S3A doesn't auth with S3 Frankfurt; this installation only supports the V4 API.
> There are some JVM options that should set this, but even they don't appear 
> to be enough. It appears that we have to allow the s3a client to change the 
> endpoint with which it authenticates from the generic "AWS S3" one to a 
> Frankfurt-specific one.






[jira] [Updated] (HADOOP-13325) s3n fails to work with S3 Frankfurt or Seoul - 400 : Bad Request

2016-06-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13325:

Summary: s3n fails to work with S3 Frankfurt or Seoul - 400 : Bad Request  
(was: s3n fails to work with S3 Frankfurt or Seol - 400 : Bad Request)

> s3n fails to work with S3 Frankfurt or Seoul - 400 : Bad Request
> 
>
> Key: HADOOP-13325
> URL: https://issues.apache.org/jira/browse/HADOOP-13325
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Priority: Minor
>
> The Hadoop S3N {{s3n://}} client does not work with AWS S3 Frankfurt or S3 
> Seoul.
> This is because these S3 installations only support the V4 signing API.
> S3A *does* work with them; filing this JIRA just to provide a searchable bug 
> report.






[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-06-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13252:

Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch, 
> HADOOP-13252-branch-2-003.patch
>
>
> We've now got some fairly complex auth mechanisms going on: Hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider extension point:
> * add logging of what's going on in s3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update the documents.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see if the env vars were kicking in, 
> sysprops, etc.
> What we mustn't do is leak any secrets: this should identify whether 
> properties and env vars are set, not what their values are. I don't believe 
> that this will generate a security risk.
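A chain of logins of the kind proposed above could be expressed as a single ordered configuration property. This is only a sketch of the eventual shape: the property name `fs.s3a.aws.credentials.provider` and the listed provider class names are assumptions for illustration, not the committed design:

```xml
<!-- Sketch: an ordered credential-provider chain. Each provider is tried in
     turn; the anonymous provider at the end would allow unauthenticated
     access to public buckets. Class names are illustrative assumptions. -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>
    org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider,
    com.amazonaws.auth.EnvironmentVariableCredentialsProvider,
    org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider
  </value>
</property>
```

With debug logging added, each provider in the chain could then report (without printing values) whether its source of secrets was set.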






[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-06-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13252:

Attachment: HADOOP-13252-branch-2-003.patch

Patch 003: rebased to trunk and rebuilt to trigger Yetus.

> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch, 
> HADOOP-13252-branch-2-003.patch
>
>
> We've now got some fairly complex auth mechanisms going on: Hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider extension point:
> * add logging of what's going on in s3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update the documents.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see if the env vars were kicking in, 
> sysprops, etc.
> What we mustn't do is leak any secrets: this should identify whether 
> properties and env vars are set, not what their values are. I don't believe 
> that this will generate a security risk.






[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-06-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13252:

Status: Open  (was: Patch Available)

> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch
>
>
> We've now got some fairly complex auth mechanisms going on: Hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider extension point:
> * add logging of what's going on in s3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update the documents.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see if the env vars were kicking in, 
> sysprops, etc.
> What we mustn't do is leak any secrets: this should identify whether 
> properties and env vars are set, not what their values are. I don't believe 
> that this will generate a security risk.






[jira] [Commented] (HADOOP-12588) Fix intermittent test failure of TestGangliaMetrics

2016-06-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352855#comment-15352855
 ] 

Steve Loughran commented on HADOOP-12588:
-

I'm still seeing failures here, such as in HADOOP-13323.

If we can't stabilise this test, I think we should just cull it. It fails so 
often that it is ignored, so if a regression did show up, we wouldn't catch it.

> Fix intermittent test failure of TestGangliaMetrics
> ---
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12588.001.patch, HADOOP-12588.addendum.02.patch, 
> HADOOP-12588.addendum.03.patch, HADOOP-12588.addendum.patch
>
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
> FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
> testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
> elapsed: 0.39 sec  <<< FAILURE!
> java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
> {quote}






[jira] [Reopened] (HADOOP-12588) Fix intermittent test failure of TestGangliaMetrics

2016-06-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-12588:
-

> Fix intermittent test failure of TestGangliaMetrics
> ---
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12588.001.patch, HADOOP-12588.addendum.02.patch, 
> HADOOP-12588.addendum.03.patch, HADOOP-12588.addendum.patch
>
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
> FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
> testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
> elapsed: 0.39 sec  <<< FAILURE!
> java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
> {quote}






[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-06-28 Thread Sergey Mazin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352774#comment-15352774
 ] 

Sergey Mazin commented on HADOOP-13075:
---

Could you please share the patch here? We would like to check it.

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of these, the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support already available in aws-java-sdk, it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html
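For illustration, the existing SSE-S3 opt-in (HADOOP-10568) is a single configuration property; the additional SSE-KMS/SSE-C properties requested here might follow the same pattern. The first property below exists today; the key-carrying property is a hypothetical name sketched for this discussion, not part of any committed patch:

```xml
<!-- Existing SSE-S3 opt-in: ask S3 to encrypt objects at rest with
     Amazon-managed keys. -->
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>AES256</value>
</property>
<!-- Hypothetical extension for SSE-KMS: a KMS key id alongside the algorithm.
     Property name and value are assumptions for illustration only. -->
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:...</value>
</property>
```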






[jira] [Updated] (HADOOP-13324) s3a doesn't authenticate with S3 frankfurt (or other V4 auth only endpoints)

2016-06-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13324:

Status: Open  (was: Patch Available)

The pasted-in stack trace retained the tabs; will fix.

> s3a doesn't authenticate with S3 frankfurt (or other V4 auth only endpoints)
> 
>
> Key: HADOOP-13324
> URL: https://issues.apache.org/jira/browse/HADOOP-13324
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13324-branch-2-001.patch
>
>
> S3A doesn't auth with S3 Frankfurt; this installation only supports the V4 API.
> There are some JVM options that should set this, but even they don't appear 
> to be enough. It appears that we have to allow the s3a client to change the 
> endpoint with which it authenticates from the generic "AWS S3" one to a 
> Frankfurt-specific one.


