[jira] [Updated] (HADOOP-13879) Remove deprecated FileSystem.getServerDefaults()

2016-12-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13879:
---
Target Version/s: 3.0.0-alpha2
  Status: Patch Available  (was: Open)

> Remove deprecated FileSystem.getServerDefaults()
> 
>
> Key: HADOOP-13879
> URL: https://issues.apache.org/jira/browse/HADOOP-13879
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13879.01.patch
>
>
> FileSystem.getServerDefaults() was deprecated by HADOOP-8422 and the fix 
> version is 2.0.2-alpha. The API can be removed in Hadoop 3.
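For context, a minimal caller-side sketch of the migration this removal forces; the {{Path}} argument and the printed field are illustrative choices, not part of the attached patch.

{code:java}
// Hedged illustration: move from the deprecated no-arg call to the
// Path-based overload that HADOOP-8422 introduced.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.fs.Path;

public class ServerDefaultsMigration {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Before (deprecated since 2.0.2-alpha, removed by this JIRA):
    //   FsServerDefaults d = fs.getServerDefaults();
    // After (any path on the target filesystem works):
    FsServerDefaults d = fs.getServerDefaults(new Path("/"));
    System.out.println("block size: " + d.getBlockSize());
  }
}
{code}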



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13879) Remove deprecated FileSystem.getServerDefaults()

2016-12-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13879:
---
Attachment: HADOOP-13879.01.patch

> Remove deprecated FileSystem.getServerDefaults()
> 
>
> Key: HADOOP-13879
> URL: https://issues.apache.org/jira/browse/HADOOP-13879
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13879.01.patch
>
>
> FileSystem.getServerDefaults() was deprecated by HADOOP-8422 and the fix 
> version is 2.0.2-alpha. The API can be removed in Hadoop 3.






[jira] [Updated] (HADOOP-13879) Remove deprecated FileSystem.getServerDefaults()

2016-12-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13879:
---
Description: FileSystem.getServerDefaults() was deprecated by HADOOP-8422 
and the fix version is 2.0.2-alpha. The API can be removed in Hadoop 3.

> Remove deprecated FileSystem.getServerDefaults()
> 
>
> Key: HADOOP-13879
> URL: https://issues.apache.org/jira/browse/HADOOP-13879
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>
> FileSystem.getServerDefaults() was deprecated by HADOOP-8422 and the fix 
> version is 2.0.2-alpha. The API can be removed in Hadoop 3.






[jira] [Created] (HADOOP-13879) Remove deprecated FileSystem.getServerDefaults()

2016-12-08 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-13879:
--

 Summary: Remove deprecated FileSystem.getServerDefaults()
 Key: HADOOP-13879
 URL: https://issues.apache.org/jira/browse/HADOOP-13879
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka









[jira] [Updated] (HADOOP-13878) Remove the usage of long deprecated UTF8 class

2016-12-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13878:
---
Status: Open  (was: Patch Available)

Existing tests fail, cancelling the patch.

> Remove the usage of long deprecated UTF8 class
> --
>
> Key: HADOOP-13878
> URL: https://issues.apache.org/jira/browse/HADOOP-13878
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13878.01.patch, HADOOP-13878.02.patch
>
>







[jira] [Commented] (HADOOP-13824) FsShell can suppress the real error if no error message is present

2016-12-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734540#comment-15734540
 ] 

John Zhuge commented on HADOOP-13824:
-

compile and javac had the following error, probably due to an intermittent 
GitHub connection issue. The same link works now.
{noformat}
bower datatables#~1.10.8   ECMDERR Failed to execute "git 
ls-remote --tags --heads https://github.com/DataTables/DataTables.git", exit 
code of #128 fatal: unable to access 
'https://github.com/DataTables/DataTables.git/': Failed to connect to 
github.com port 443: Connection timed out
{noformat}

TestLambdaTestUtils failure is unrelated.

> FsShell can suppress the real error if no error message is present
> --
>
> Key: HADOOP-13824
> URL: https://issues.apache.org/jira/browse/HADOOP-13824
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1, 2.7.3
>Reporter: Rob Vesse
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-13824.001.patch, HADOOP-13824.002.patch, 
> HADOOP-13824.003.patch
>
>
> The {{FsShell}} error handling assumes in {{displayError()}} that the 
> {{message}} argument is not {{null}}. However, in the case where it is, this 
> leads to an NPE, which suppresses the actual error information because a 
> higher level of error handling kicks in and just dumps the stack trace of 
> the NPE instead.
> e.g.
> {noformat}
> Exception in thread "main" java.lang.NullPointerException
>   at org.apache.hadoop.fs.FsShell.displayError(FsShell.java:304)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:289)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> {noformat}
> This is deeply unhelpful because, depending on what the underlying error 
> was, there may be no stack trace dumped/logged for it (as HADOOP-7114 
> provides), since {{FsShell}} doesn't explicitly dump traces for 
> {{IllegalArgumentException}}, which appears to be the underlying cause of my 
> issue. Line 289 is where {{displayError()}} is called for 
> {{IllegalArgumentException}} handling, and that catch clause does not log 
> the error.
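A minimal sketch of one possible null-safe fix, assuming the NPE comes from splitting a null message; the helper below is illustrative, not the attached patches.

{code:java}
// Hedged sketch, not the committed patch: report the exception class name
// when no message is present, so the real error is not masked by an NPE.
public class DisplayErrorSketch {
  static void displayError(String cmd, Throwable e) {
    String message = e.getLocalizedMessage();
    String firstLine = (message == null)
        ? e.getClass().getName()      // fallback when the message is null
        : message.split("\n")[0];     // otherwise keep the first line only
    System.err.println(cmd + ": " + firstLine);
  }

  public static void main(String[] args) {
    displayError("ls", new IllegalArgumentException()); // no message, no NPE
  }
}
{code}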






[jira] [Commented] (HADOOP-13878) Remove the usage of long deprecated UTF8 class

2016-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734524#comment-15734524
 ] 

Hadoop QA commented on HADOOP-13878:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
13s{color} | {color:green} root generated 0 new + 699 unchanged - 17 fixed = 
699 total (was 716) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 317 unchanged - 1 fixed = 318 total (was 318) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 57s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.io.TestArrayPrimitiveWritable |
|   | hadoop.io.TestObjectWritableProtos |
|   | hadoop.io.TestArrayWritable |
|   | hadoop.io.TestEnumSetWritable |
| Timed out junit tests | org.apache.hadoop.metrics2.lib.TestMutableMetrics |
|   | org.apache.hadoop.io.TestSequenceFile |
|   | org.apache.hadoop.io.nativeio.TestNativeIO |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13878 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842495/HADOOP-13878.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4b412d79f01a 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7d8e440 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11228/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11228/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Updated] (HADOOP-13878) Remove the usage of long deprecated UTF8 class

2016-12-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13878:
---
Attachment: HADOOP-13878.02.patch

> Remove the usage of long deprecated UTF8 class
> --
>
> Key: HADOOP-13878
> URL: https://issues.apache.org/jira/browse/HADOOP-13878
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13878.01.patch, HADOOP-13878.02.patch
>
>







[jira] [Commented] (HADOOP-13878) Remove the usage of long deprecated UTF8 class

2016-12-08 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734435#comment-15734435
 ] 

Akira Ajisaka commented on HADOOP-13878:


Filed MAPREDUCE-6819 for the MapReduce code change. I'll upload a patch 
without the MapReduce changes.

> Remove the usage of long deprecated UTF8 class
> --
>
> Key: HADOOP-13878
> URL: https://issues.apache.org/jira/browse/HADOOP-13878
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13878.01.patch
>
>







[jira] [Updated] (HADOOP-13878) Remove the usage of long deprecated UTF8 class

2016-12-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13878:
---
Target Version/s: 3.0.0-alpha2
  Status: Patch Available  (was: Open)

> Remove the usage of long deprecated UTF8 class
> --
>
> Key: HADOOP-13878
> URL: https://issues.apache.org/jira/browse/HADOOP-13878
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13878.01.patch
>
>







[jira] [Updated] (HADOOP-13878) Remove the usage of long deprecated UTF8 class

2016-12-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13878:
---
Attachment: HADOOP-13878.01.patch

> Remove the usage of long deprecated UTF8 class
> --
>
> Key: HADOOP-13878
> URL: https://issues.apache.org/jira/browse/HADOOP-13878
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13878.01.patch
>
>







[jira] [Assigned] (HADOOP-13878) Remove the usage of long deprecated UTF8 class

2016-12-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-13878:
--

Assignee: Akira Ajisaka

> Remove the usage of long deprecated UTF8 class
> --
>
> Key: HADOOP-13878
> URL: https://issues.apache.org/jira/browse/HADOOP-13878
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>







[jira] [Created] (HADOOP-13878) Remove the usage of long deprecated UTF8 class

2016-12-08 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-13878:
--

 Summary: Remove the usage of long deprecated UTF8 class
 Key: HADOOP-13878
 URL: https://issues.apache.org/jira/browse/HADOOP-13878
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Akira Ajisaka









[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734222#comment-15734222
 ] 

Hadoop QA commented on HADOOP-13449:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
37s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
55s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 36s{color} | {color:orange} root: The patch generated 1 new + 8 unchanged - 
0 fixed = 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13449 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842477/HADOOP-13449-HADOOP-13345.013.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux d255e124893c 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734208#comment-15734208
 ] 

Hudson commented on HADOOP-13852:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10974 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10974/])
Revert "HADOOP-13852 hadoop build to allow hadoop version property to be 
(aajisaka: rev 7d8e440eee51562d0769efe04eb97256fe6061d1)
* (edit) BUILDING.txt
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
* (edit) 
hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
* (edit) hadoop-common-project/hadoop-common/pom.xml


> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden manually.
> This will not affect the version names of artifacts, merely the declared 
> Hadoop version visible in {{VersionInfo.getVersion()}}.
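For context, a hedged sketch of the kind of major-version gate a shim layer applies; the parsing and exception are assumptions, not Hive's actual code.

{code:java}
// Hedged illustration of a shim-style version gate. The proposed build
// override changes only what VersionInfo.getVersion() reports; artifact
// version names are unaffected.
import org.apache.hadoop.util.VersionInfo;

public class VersionGateSketch {
  public static void main(String[] args) {
    String version = VersionInfo.getVersion();            // e.g. "3.0.0-alpha2"
    int major = Integer.parseInt(version.split("\\.")[0]);
    if (major != 2) {
      throw new IllegalStateException("Unsupported Hadoop version: " + version);
    }
  }
}
{code}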






[jira] [Updated] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13852:
---
Fix Version/s: (was: 3.0.0-alpha2)

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden manually.
> This will not affect the version names of artifacts, merely the declared 
> Hadoop version visible in {{VersionInfo.getVersion()}}.






[jira] [Reopened] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-13852:


Reverted this commit. Hi [~ste...@apache.org], would you move the setting to 
the hadoop-project pom.xml?

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden manually.
> This will not affect the version names of artifacts, merely the declared 
> Hadoop version visible in {{VersionInfo.getVersion()}}.






[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-08 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734188#comment-15734188
 ] 

Akira Ajisaka commented on HADOOP-13852:


It looks like the failures of TestRMWebServices and TestNMWebServices are 
related to this commit.

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden manually.
> This will not affect the version names of artifacts, merely the declared 
> Hadoop version visible in {{VersionInfo.getVersion()}}.






[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-08 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734171#comment-15734171
 ] 

Konstantinos Karanasos commented on HADOOP-13852:
-

Thanks for the reply, [~ajisakaa].
I also found two more test classes that are currently failing on trunk, namely 
{{TestRMWebServices}} and {{TestNMWebServices}}. They are related to the last 
couple of days' commits, but I did not manage to figure out which exact commit 
it was (since I was getting the failures only intermittently).

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden manually.
> This will not affect the version names of artifacts, merely the declared 
> Hadoop version visible in {{VersionInfo.getVersion()}}.






[jira] [Comment Edited] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-08 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734158#comment-15734158
 ] 

Akira Ajisaka edited comment on HADOOP-13852 at 12/9/16 3:30 AM:
-

{quote}
Hi, I think this commit broke the TestRMWebServicesNodes.
Any ideas what caused the problem and how we can fix it?
{quote}
{code:title=hadoop-common-project/hadoop-common/pom.xml}
<declared.hadoop.version>${pom.version}</declared.hadoop.version>
{code}
The setting is in the hadoop-common module, but I think it should be in the 
hadoop-project module. The hadoop-common module is not a parent of the 
hadoop-yarn module, so the test cannot use the setting above.


was (Author: ajisakaa):
{quote}
Hi, I think this commit broke the TestRMWebServicesNodes.
Any ideas what caused the problem and how we can fix it?
{quote}
{code:hadoop-common-project/hadoop-common/pom.xml}
<declared.hadoop.version>${pom.version}</declared.hadoop.version>
{code}
The setting is in the hadoop-common module, but I think it should be in the 
hadoop-project module. The hadoop-common module is not a parent of the 
hadoop-yarn module, so the test cannot use the setting above.

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden manually.
> This will not affect the version names of artifacts, merely the declared 
> Hadoop version visible in {{VersionInfo.getVersion()}}.






[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-08 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734158#comment-15734158
 ] 

Akira Ajisaka commented on HADOOP-13852:


{quote}
Hi, I think this commit broke the TestRMWebServicesNodes.
Any ideas what caused the problem and how we can fix it?
{quote}
{code:hadoop-common-project/hadoop-common/pom.xml}
<declared.hadoop.version>${pom.version}</declared.hadoop.version>
{code}
The setting is in the hadoop-common module, but I think it should be in the 
hadoop-project module. The hadoop-common module is not a parent of the 
hadoop-yarn module, so the test cannot use the setting above.

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden manually.
> This will not affect the version names of artifacts, merely the declared 
> Hadoop version visible in {{VersionInfo.getVersion()}}.






[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-12-08 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734138#comment-15734138
 ] 

Aaron Fabbri commented on HADOOP-13449:
---

Sounds good.  Thanks for the analysis of that failure. I will do some review + 
testing with the v13 patch tonight and make sure we have updated JIRAs for any 
issues, including the createFakeDirectoryIfNecessary() one.  I can commit the 
v13 patch in the morning if everyone is in favor and I don't find any new 
issues.

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, 
> HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, 
> HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, 
> HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, 
> HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch, 
> HADOOP-13449-HADOOP-13345.011.patch, HADOOP-13449-HADOOP-13345.012.patch, 
> HADOOP-13449-HADOOP-13345.013.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.
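For orientation, a hedged and heavily simplified sketch of the store's shape; the real interface on the HADOOP-13345 branch is richer (move, listChildren, etc.).

{code:java}
// Hedged, simplified sketch; names mirror but do not reproduce the branch's
// MetadataStore/PathMetadata types.
import java.io.IOException;
import org.apache.hadoop.fs.Path;

interface MetadataStoreSketch {
  /** Return metadata for the path, or null if the store has no entry. */
  PathMetadataSketch get(Path path) throws IOException;

  /** Create or overwrite the entry for a path. */
  void put(PathMetadataSketch meta) throws IOException;

  /** Remove the entry for a path, if any. */
  void delete(Path path) throws IOException;
}

/** Placeholder for the branch's PathMetadata. */
class PathMetadataSketch {
}
{code}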






[jira] [Updated] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-12-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13449:
---
Attachment: HADOOP-13449-HADOOP-13345.013.patch

Thanks [~fabbri].

I saw you created two follow-up JIRAs, which is very good; we can address them 
separately. I have also posted the v13 patch, whose changes are fairly 
isolated from the v12 patch and can be reviewed separately. Specifically, the 
v13 patch addresses the trivial checkstyle warning and changes the region used 
when creating the DynamoDB table from the S3 client's region to the bucket's 
region, which I think makes more sense.

For the failing integration test 
{{ITestS3AFileOperationCost#testFakeDirectoryDeletion}}, I found it fails with 
LocalMetadataStore as well; can you verify that? The reason it fails is that 
the directories-created counter does not match after the rename.
{code:title=ITestS3AFileOperationCost#testFakeDirectoryDeletion()}
fs.rename(srcFilePath, destFilePath);
state = "after rename(srcFilePath, destFilePath)";
directoriesCreated.assertDiffEquals(state, 1);<=== fail here
{code}
When the S3AFS renames a file (as in this case), it will create fake parent 
directories after deleting the old file when necessary. This is guarded by 
{{createFakeDirectoryIfNecessary}}, which checks whether the parent directory 
exists.
{code:title=createFakeDirectoryIfNecessary() called by rename()}
  private void createFakeDirectoryIfNecessary(Path f)
  throws IOException, AmazonClientException {
String key = pathToKey(f);
if (!key.isEmpty() && !exists(f)) {   <=== only if nonexistent
  LOG.debug("Creating new fake directory at {}", f);
  createFakeDirectory(key);
}
  }
{code}

However, the {{exists()}} call goes through {{getFileStatus()}}, which checks 
the metadata store first. The metadata store will return an entry, so no fake 
directory will be created in S3.
{code}
  public S3AFileStatus getFileStatus(final Path f) throws IOException {
incrementStatistic(INVOCATION_GET_FILE_STATUS);
final Path path = qualify(f);
String key = pathToKey(path);
LOG.debug("Getting path status for {}  ({})", path , key);

// Check MetadataStore, if any.
PathMetadata pm = metadataStore.get(path);
if (pm != null) {
  // HADOOP-13760: handle deleted files, i.e. PathMetadata#isDeleted() here
  return (S3AFileStatus)pm.getFileStatus();
}
...
{code}
This seems to be a bug elsewhere and can also be addressed/investigated 
separately.
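One hedged direction for such a fix; {{s3ExistsRaw()}} below is a hypothetical helper, named only for illustration, that would probe S3 directly instead of going through the MetadataStore-short-circuited {{getFileStatus()}}.

{code:java}
// Hedged sketch only. s3ExistsRaw() is hypothetical: it would query S3
// itself, bypassing the MetadataStore hit in getFileStatus() shown above.
private void createFakeDirectoryIfNecessary(Path f)
    throws IOException, AmazonClientException {
  String key = pathToKey(f);
  if (!key.isEmpty() && !s3ExistsRaw(f)) {  // probe S3, not the MetadataStore
    LOG.debug("Creating new fake directory at {}", f);
    createFakeDirectory(key);
  }
}
{code}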


> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, 
> HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, 
> HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, 
> HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, 
> HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch, 
> HADOOP-13449-HADOOP-13345.011.patch, HADOOP-13449-HADOOP-13345.012.patch, 
> HADOOP-13449-HADOOP-13345.013.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.






[jira] [Commented] (HADOOP-11627) Remove io.native.lib.available

2016-12-08 Thread wenqingChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734085#comment-15734085
 ] 

wenqingChen commented on HADOOP-11627:
--

Can you help me make an iOS end-to-end encryption plugin?

> Remove io.native.lib.available
> --
>
> Key: HADOOP-11627
> URL: https://issues.apache.org/jira/browse/HADOOP-11627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0-alpha1
>Reporter: Akira Ajisaka
>Assignee: Brahma Reddy Battula
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
> HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
> HADOOP-11627-007.patch, HADOOP-11627-008.patch, HADOOP-11627-009.patch, 
> HADOOP-11627-010.patch, HADOOP-11627-011.patch, HADOOP-11627.patch
>
>
> According to the discussion in HADOOP-8642, we should remove 
> {{io.native.lib.available}} from trunk, and always use native libraries if 
> they exist.
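In code, the replacement for the removed flag is just the runtime probe; a minimal sketch:

{code:java}
// Minimal sketch: with io.native.lib.available gone, callers simply check
// whether the native library actually loaded.
import org.apache.hadoop.util.NativeCodeLoader;

public class NativeProbeSketch {
  public static void main(String[] args) {
    if (NativeCodeLoader.isNativeCodeLoaded()) {
      System.out.println("using native implementations");
    } else {
      System.out.println("falling back to pure-Java implementations");
    }
  }
}
{code}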






[jira] [Commented] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733929#comment-15733929
 ] 

Hadoop QA commented on HADOOP-13565:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-common-project/hadoop-auth: The patch 
generated 0 new + 0 unchanged - 28 fixed = 0 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
31s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842455/HADOOP-13565.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9df2e16e82c0 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 13d8e55 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11225/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11225/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects 

[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-12-08 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13565:

Attachment: HADOOP-13565.03.patch

> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13565.00.patch, HADOOP-13565.01.patch, 
> HADOOP-13565.02.patch, HADOOP-13565.03.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use the canonicalized 
> server name derived from the HTTP request to build the server SPN and 
> authenticate the client. This can be problematic if the HTTP client/server 
> are running from a non-local Kerberos realm that the local realm has trust 
> with (e.g., NN UI).
> For example:
> The server is running its HTTP endpoint using an SPN from the client realm:
> hadoop.http.authentication.kerberos.principal = HTTP/_HOST@TEST.COM
> The client sends a request to the namenode at http://NN1.example.com:50070 
> from client.test@test.com.
> The client talks to the KDC first and gets a service ticket 
> HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO 
> negotiation. 
> The authentication will end up with either a "no valid credential" error or 
> a checksum failure, depending on the HTTP client's name resolution or the 
> HTTP Host field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", 
> serverName)}} will always return an SPN with the local realm 
> (HTTP/nn.example.com@EXAMPLE.COM) no matter whether the server login SPN is 
> from that realm or not. 
> The proposed fix is to use the default server login principal instead (by 
> passing null as the first parameter to gssManager.createCredential()). This 
> way we avoid depending on HTTP client behavior (Host header or name 
> resolution like CNAME) or assumptions about the local realm. 
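A hedged sketch of the proposed change using only the JGSS API (the surrounding handler code is not reproduced); passing {{null}} as the name makes the acceptor use the default principal from the server's login subject.

{code:java}
// Hedged sketch of the proposed fix. Passing null as the GSSName means
// "use the default acceptor principal of the current login subject"
// instead of an SPN rebuilt from the client-supplied host name.
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.Oid;

public class SpnegoCredentialSketch {
  public static GSSCredential serverCredential() throws GSSException {
    GSSManager gssManager = GSSManager.getInstance();
    Oid spnegoOid = new Oid("1.3.6.1.5.5.2");   // SPNEGO mechanism OID
    return gssManager.createCredential(
        null,                                   // default login principal
        GSSCredential.INDEFINITE_LIFETIME,
        spnegoOid,
        GSSCredential.ACCEPT_ONLY);
  }
}
{code}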






[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-12-08 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13565:

Attachment: (was: HADOOP-13565.03.patch)

> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13565.00.patch, HADOOP-13565.01.patch, 
> HADOOP-13565.02.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use the canonicalized 
> server name derived from the HTTP request to build the server SPN and 
> authenticate the client. This can be problematic if the HTTP client/server 
> are running from a non-local Kerberos realm that the local realm has trust 
> with (e.g., NN UI).
> For example:
> The server is running its HTTP endpoint using an SPN from the client realm:
> hadoop.http.authentication.kerberos.principal = HTTP/_HOST@TEST.COM
> The client sends a request to the namenode at http://NN1.example.com:50070 
> from client.test@test.com.
> The client talks to the KDC first and gets a service ticket 
> HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO 
> negotiation. 
> The authentication will end up with either a "no valid credential" error or 
> a checksum failure, depending on the HTTP client's name resolution or the 
> HTTP Host field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", 
> serverName)}} will always return an SPN with the local realm 
> (HTTP/nn.example.com@EXAMPLE.COM) no matter whether the server login SPN is 
> from that realm or not. 
> The proposed fix is to use the default server login principal instead (by 
> passing null as the first parameter to gssManager.createCredential()). This 
> way we avoid depending on HTTP client behavior (Host header or name 
> resolution like CNAME) or assumptions about the local realm. 






[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733838#comment-15733838
 ] 

Andrew Wang commented on HADOOP-11804:
--

We need to fix the precommit errors, but the test failures at least look 
unrelated.

Sean, anything else we should take care of first? If Avro and HBase work, then 
IMO we should put this in and iterate in trunk.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, 
> HADOOP-11804.11.patch, HADOOP-11804.12.patch, HADOOP-11804.13.patch, 
> HADOOP-11804.2.patch, HADOOP-11804.3.patch, HADOOP-11804.4.patch, 
> HADOOP-11804.5.patch, HADOOP-11804.6.patch, HADOOP-11804.7.patch, 
> HADOOP-11804.8.patch, HADOOP-11804.9.patch, hadoop-11804-client-test.tar.gz
>
>
> Make a hadoop-client-api and a hadoop-client-runtime that e.g. HBase can use 
> to talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> See the proposal on the parent issue for details.






[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-12-08 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13565:

Attachment: HADOOP-13565.03.patch

Thanks [~jnp] for the review. Uploaded a new patch to address the comments.

> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13565.00.patch, HADOOP-13565.01.patch, 
> HADOOP-13565.02.patch, HADOOP-13565.03.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use the canonicalized 
> server name derived from the HTTP request to build the server SPN and 
> authenticate the client. This can be problematic if the HTTP client/server 
> are running from a non-local Kerberos realm that the local realm has trust 
> with (e.g., NN UI).
> For example:
> The server is running its HTTP endpoint using an SPN from the client realm:
> hadoop.http.authentication.kerberos.principal = HTTP/_HOST@TEST.COM
> The client sends a request to the namenode at http://NN1.example.com:50070 
> from client.test@test.com.
> The client talks to the KDC first and gets a service ticket 
> HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO 
> negotiation. 
> The authentication will end up with either a "no valid credential" error or 
> a checksum failure, depending on the HTTP client's name resolution or the 
> HTTP Host field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", 
> serverName)}} will always return an SPN with the local realm 
> (HTTP/nn.example.com@EXAMPLE.COM) no matter whether the server login SPN is 
> from that realm or not. 
> The proposed fix is to use the default server login principal instead (by 
> passing null as the first parameter to gssManager.createCredential()). This 
> way we avoid depending on HTTP client behavior (Host header or name 
> resolution like CNAME) or assumptions about the local realm. 






[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem

2016-12-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733822#comment-15733822
 ] 

Andrew Wang commented on HADOOP-13055:
--

Thanks for working on this, Zhe and Manoj. I gave the patch a look and had 
some high-level comments to go over first.

It looks like this is mostly config changes to allow mounting a filesystem on 
/. It doesn't enable nested mounts though, so if you mount an FS on /, that's 
all you get. This doesn't seem that useful, since the point of ViewFS is to 
combine multiple FS namespaces.

I interpret "linkMergeSlash" to mean having an FS mounted on /, but then allow 
the VFS mount table to override that. Essentially, nested mounts, special cased 
to mounting over the "/" mount.

Nested mounts are problematic though because we don't do path resolution fully 
on the client-side. This surfaces for recursive delete; once we've deferred it 
to a mounted filesystem, it won't pop back to VFS to get redirected to recurse 
through a nested mount. It won't behave like "rm -rf". I haven't looked at the 
full FS interface to see what else is out there.

My question is why "/" is called out as a special case for nested mounts. It 
doesn't seem easier to implement, and the semantic issues with recursive ops 
are still there. It's also different from a merge mount, in that merging merges 
multiple FSs, while IIUC this is mounting an FS *over* another FS.

[~sanjay.radia] could you provide any more historical perspective on the intent 
here?

HADOOP-8299 also provides a little hint from [~eli] in the description, where 
he refers to the idea of a "default NN" for a mount table. This lets us ignore 
the semantic differences with unix mounts, and it simplifies the implementation 
as well (first try to resolve against the VFS mount table, then fall back to 
the default FS).
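To make that fallback concrete, a minimal sketch (an illustration, not the ViewFs code) of "resolve against the mount table first, then fall back to a default FS":

{code:java}
// Minimal sketch, not ViewFs: longest-prefix mount resolution with a
// "default FS" fallback, per the HADOOP-8299 idea above.
import java.net.URI;
import java.util.Map;
import java.util.TreeMap;

public class MountFallbackSketch {
  private final TreeMap<String, URI> mounts = new TreeMap<>();
  private final URI defaultFs;

  public MountFallbackSketch(URI defaultFs) {
    this.defaultFs = defaultFs;
  }

  public void addLink(String mountPath, URI target) {
    mounts.put(mountPath, target);
  }

  /** The longest matching mount point wins; otherwise use the default FS. */
  public String resolve(String path) {
    for (Map.Entry<String, URI> e : mounts.descendingMap().entrySet()) {
      if (path.startsWith(e.getKey())) {
        return e.getValue() + path.substring(e.getKey().length());
      }
    }
    return defaultFs + path;  // fallback: the "default NN" for this table
  }

  public static void main(String[] args) {
    MountFallbackSketch vfs = new MountFallbackSketch(URI.create("hdfs://nn99"));
    vfs.addLink("/user", URI.create("hdfs://nn1/user"));
    System.out.println(vfs.resolve("/user/alice")); // hdfs://nn1/user/alice
    System.out.println(vfs.resolve("/tmp/x"));      // hdfs://nn99/tmp/x
  }
}
{code}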

> Implement linkMergeSlash for ViewFileSystem
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this case the root of the mount table is merged with the root of
>  *   hdfs://nn99/
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2016-12-08 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-13877:
-

 Summary: S3Guard: fix TestDynamoDBMetadataStore when 
fs.s3a.s3guard.ddb.table is set
 Key: HADOOP-13877
 URL: https://issues.apache.org/jira/browse/HADOOP-13877
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: HADOOP-13345
Reporter: Aaron Fabbri
Assignee: Aaron Fabbri


I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
{{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.

I have a fix already, so I'll take this JIRA.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13876) S3Guard: better support for multi-bucket access including read-only

2016-12-08 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-13876:
-

 Summary: S3Guard: better support for multi-bucket access including 
read-only
 Key: HADOOP-13876
 URL: https://issues.apache.org/jira/browse/HADOOP-13876
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: HADOOP-13345
Reporter: Aaron Fabbri


HADOOP-13449 adds support for DynamoDBMetadataStore.

The code currently supports two options for choosing DynamoDB table names:
1. Use the name of each S3 bucket and auto-create a DynamoDB table for each.
2. Configure a table name in the {{fs.s3a.s3guard.ddb.table}} parameter (see the 
sketch below).
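
For reference, a minimal sketch of option 2 (the table name below is a made-up 
example, not a recommended value):

{code}
import org.apache.hadoop.conf.Configuration;

// Hedged sketch of option 2: one shared DynamoDB table for every bucket.
Configuration conf = new Configuration();
conf.set("fs.s3a.s3guard.ddb.table", "shared-s3guard-metadata");
// Option 1 is the default behavior: leave the key unset and a table
// named after each bucket is auto-created.
{code}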

One of the issues is with accessing read-only buckets. If a user accesses a 
read-only bucket with credentials that do not have DynamoDB write permissions, 
they will get errors when trying to access the read-only bucket. This 
manifests as test failures for {{ITestS3AAWSCredentialsProvider}}.

Goals for this JIRA:
- Fix {{ITestS3AAWSCredentialsProvider}} in a way that makes sense for the real 
use-case.
- Allow for a "one DynamoDB table per cluster" configuration with a way to 
choose which credentials are used for DynamoDB.
- Document limitations etc. in the s3guard.md site doc.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-12-08 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733756#comment-15733756
 ] 

Aaron Fabbri commented on HADOOP-13449:
---

I'm in the process of testing the v12 patch.  [~liuml07] and [~steve_l] I'd 
like to propose we get this patch in so we can split up the remaining work on 
DynamoDB.  I'm thinking the steps are:

1. I will finish testing and do a quick review on the v12 patch here.
2. I will open JIRAs for outstanding issues related to DynamoDB.
3. If you guys are +1 on this, I will commit the v12 (or latest) patch.

One possible concern is whether we want to try and merge the HADOOP-13345 branch 
to trunk before DynamoDB support is finished (we'd talked about that to deal 
with code churn and to allow things like working on parallel rename without 
needing to redo all the s3guard rename code, etc.). If we still want to 
attempt this, let me know.



> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, 
> HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, 
> HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, 
> HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, 
> HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch, 
> HADOOP-13449-HADOOP-13345.011.patch, HADOOP-13449-HADOOP-13345.012.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733747#comment-15733747
 ] 

Hadoop QA commented on HADOOP-11804:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client hadoop-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 10m 
44s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 44s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  9m 
44s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
16s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client 
hadoop-client-modules/hadoop-client-api 
hadoop-client-modules/hadoop-client-runtime 
hadoop-client-modules/hadoop-client-check-invariants 
hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-check-test-invariants . 
hadoop-client-modules hadoop-dist {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} 
patch/hadoop-client-modules/hadoop-client-integration-tests no findbugs output 
file 
(hadoop-client-modules/hadoop-client-integration-tests/target/findbugsXml.xml) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 10s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
46s{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}225m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   

[jira] [Commented] (HADOOP-13809) hive: 'java.lang.IllegalStateException(zip file closed)'

2016-12-08 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733678#comment-15733678
 ] 

Wangda Tan commented on HADOOP-13809:
-

It looks related to one open Hive JIRA: 
https://issues.apache.org/jira/browse/HIVE-11681.

See the analysis: 
https://issues.apache.org/jira/browse/HIVE-11681?focusedCommentId=14736752&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14736752

> hive: 'java.lang.IllegalStateException(zip file closed)'
> 
>
> Key: HADOOP-13809
> URL: https://issues.apache.org/jira/browse/HADOOP-13809
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Adriano
>
> Randomly some of the hive queries are failing with the below exception on 
> HS2: 
> {code}
> 2016-11-07 02:36:40,996 ERROR org.apache.hadoop.hive.ql.exec.Task: 
> [HiveServer2-Background-Pool: Thread-1823748]: Ended Job = 
> job_1478336955303_31030 with exception 'java.lang.IllegalStateException(zip 
> file 
>  closed)' 
> java.lang.IllegalStateException: zip file closed 
> at java.util.zip.ZipFile.ensureOpen(ZipFile.java:634) 
> at java.util.zip.ZipFile.getEntry(ZipFile.java:305) 
> at java.util.jar.JarFile.getEntry(JarFile.java:227) 
> at sun.net.www.protocol.jar.URLJarFile.getEntry(URLJarFile.java:128) 
> at 
> sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:132) 
> at 
> sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150)
>  
> at 
> java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:233) 
> at javax.xml.parsers.SecuritySupport$4.run(SecuritySupport.java:94) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at 
> javax.xml.parsers.SecuritySupport.getResourceAsStream(SecuritySupport.java:87)
>  
> at 
> javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:283)
>  
> at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255) 
> at 
> javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
>  
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2526) 
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503) 
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409) 
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:982) 
> at 
> org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2032) 
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:484) 
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:474) 
> at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:210) 
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:596) 
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:594) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:415) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>  
> at 
> org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:594) 
> at 
> org.apache.hadoop.mapred.JobClient.getTaskReports(JobClient.java:665) 
> at 
> org.apache.hadoop.mapred.JobClient.getReduceTaskReports(JobClient.java:689) 
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:272)
>  
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:549)
>  
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:435) 
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137) 
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) 
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1770) 
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1527) 
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1306) 
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1115) 
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1108) 
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:178)
>  
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:72)
>  
> at 
> org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:232)
>  
> at 

[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733671#comment-15733671
 ] 

Hadoop QA commented on HADOOP-11804:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client hadoop-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  9m 
25s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  9m 25s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 43s{color} | {color:orange} root: The patch generated 7 new + 0 unchanged - 
0 fixed = 7 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  8m 
29s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
15s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client 
hadoop-client-modules/hadoop-client-api . hadoop-client-modules 
hadoop-client-modules/hadoop-client-check-invariants 
hadoop-client-modules/hadoop-client-check-test-invariants 
hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-runtime hadoop-dist {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} 
patch/hadoop-client-modules/hadoop-client-integration-tests no findbugs output 
file 
(hadoop-client-modules/hadoop-client-integration-tests/target/findbugsXml.xml) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 12s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
49s{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}227m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit 

[jira] [Commented] (HADOOP-13824) FsShell can suppress the real error if no error message is present

2016-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733656#comment-15733656
 ] 

Hadoop QA commented on HADOOP-13824:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 10m 
21s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 21s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 35 unchanged - 1 fixed = 35 total (was 36) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  8s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.test.TestLambdaTestUtils |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13824 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842418/HADOOP-13824.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a98df8f8c038 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 13d8e55 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11222/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11222/artifact/patchprocess/patch-compile-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11222/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11222/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11222/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FsShell can suppress the real error if no error message is present
> 

[jira] [Updated] (HADOOP-13868) New defaults for S3A multi-part configuration

2016-12-08 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13868:
---
Status: Patch Available  (was: Open)

> New defaults for S3A multi-part configuration
> -
>
> Key: HADOOP-13868
> URL: https://issues.apache.org/jira/browse/HADOOP-13868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha1, 2.7.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13868.001.patch, optimizing-multipart-s3a.sh
>
>
> I've been looking at a big performance regression when writing to S3 from 
> Spark that appears to have been introduced with HADOOP-12891.
> In the Amazon SDK, the default threshold for multi-part copies is 320x the 
> threshold for multi-part uploads (and the block size is 20x bigger), so I 
> don't think it's necessarily wise for us to have them be the same.
> I did some quick tests and it seems to me the sweet spot when multi-part 
> copies start being faster is around 512MB. It wasn't as significant, but 
> using 104857600 (Amazon's default) for the blocksize was also slightly better.
> I propose we do the following, although they're independent decisions:
> (1) Split the configuration. Ideally, I'd like to have 
> fs.s3a.multipart.copy.threshold and fs.s3a.multipart.upload.threshold (and 
> corresponding properties for the block size). But then there's the question 
> of what to do with the existing fs.s3a.multipart.* properties. Deprecation? 
> Leave it as a short-hand for configuring both (that's overridden by the more 
> specific properties?).
> (2) Consider increasing the default values. In my tests, 256 MB seemed to be 
> where multipart uploads came into their own, and 512 MB was where multipart 
> copies started outperforming the alternative. Would be interested to hear 
> what other people have seen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13868) New defaults for S3A multi-part configuration

2016-12-08 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13868:
---
Attachment: optimizing-multipart-s3a.sh
HADOOP-13868.001.patch

Attaching a patch with my proposed defaults, and a script I used to gather data 
(it assumes you've set BUCKET to the bucket to use and HADOOP to the path of 
the Hadoop executable) in case anyone wants to verify.
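
To make proposal (1) from the description concrete, here is a sketch of what the 
split configuration could look like; the {{.upload}}/{{.copy}} keys are the 
proposed names, not existing properties:

{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Proposed (not yet existing) keys from item (1) of the description,
// with the values item (2) suggests:
conf.setLong("fs.s3a.multipart.upload.threshold", 256L * 1024 * 1024); // 256 MB
conf.setLong("fs.s3a.multipart.copy.threshold", 512L * 1024 * 1024);   // 512 MB
// The existing shared key would stay as a shorthand that the more
// specific keys above override:
conf.setLong("fs.s3a.multipart.threshold", 256L * 1024 * 1024);
{code}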

> New defaults for S3A multi-part configuration
> -
>
> Key: HADOOP-13868
> URL: https://issues.apache.org/jira/browse/HADOOP-13868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0, 3.0.0-alpha1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13868.001.patch, optimizing-multipart-s3a.sh
>
>
> I've been looking at a big performance regression when writing to S3 from 
> Spark that appears to have been introduced with HADOOP-12891.
> In the Amazon SDK, the default threshold for multi-part copies is 320x the 
> threshold for multi-part uploads (and the block size is 20x bigger), so I 
> don't think it's necessarily wise for us to have them be the same.
> I did some quick tests and it seems to me the sweet spot when multi-part 
> copies start being faster is around 512MB. It wasn't as significant, but 
> using 104857600 (Amazon's default) for the blocksize was also slightly better.
> I propose we do the following, although they're independent decisions:
> (1) Split the configuration. Ideally, I'd like to have 
> fs.s3a.multipart.copy.threshold and fs.s3a.multipart.upload.threshold (and 
> corresponding properties for the block size). But then there's the question 
> of what to do with the existing fs.s3a.multipart.* properties. Deprecation? 
> Leave it as a short-hand for configuring both (that's overridden by the more 
> specific properties?).
> (2) Consider increasing the default values. In my tests, 256 MB seemed to be 
> where multipart uploads came into their own, and 512 MB was where multipart 
> copies started outperforming the alternative. Would be interested to hear 
> what other people have seen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733612#comment-15733612
 ] 

Hadoop QA commented on HADOOP-13871:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 14 unchanged - 0 fixed = 15 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-tools_hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13871 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842424/HADOOP-13871-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 90df937809ab 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 13d8e55 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11223/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11223/artifact/patchprocess/whitespace-eol.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11223/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11223/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11223/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Updated] (HADOOP-13868) New defaults for S3A multi-part configuration

2016-12-08 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13868:
---
Summary: New defaults for S3A multi-part configuration  (was: S3A should 
configure multi-part copies and uploads separately)

> New defaults for S3A multi-part configuration
> -
>
> Key: HADOOP-13868
> URL: https://issues.apache.org/jira/browse/HADOOP-13868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0, 3.0.0-alpha1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>
> I've been looking at a big performance regression when writing to S3 from 
> Spark that appears to have been introduced with HADOOP-12891.
> In the Amazon SDK, the default threshold for multi-part copies is 320x the 
> threshold for multi-part uploads (and the block size is 20x bigger), so I 
> don't think it's necessarily wise for us to have them be the same.
> I did some quick tests and it seems to me the sweet spot when multi-part 
> copies start being faster is around 512MB. It wasn't as significant, but 
> using 104857600 (Amazon's default) for the blocksize was also slightly better.
> I propose we do the following, although they're independent decisions:
> (1) Split the configuration. Ideally, I'd like to have 
> fs.s3a.multipart.copy.threshold and fs.s3a.multipart.upload.threshold (and 
> corresponding properties for the block size). But then there's the question 
> of what to do with the existing fs.s3a.multipart.* properties. Deprecation? 
> Leave it as a short-hand for configuring both (that's overridden by the more 
> specific properties?).
> (2) Consider increasing the default values. In my tests, 256 MB seemed to be 
> where multipart uploads came into their own, and 512 MB was where multipart 
> copies started outperforming the alternative. Would be interested to hear 
> what other people have seen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13863) Hadoop - Azure: Add a new SAS key mode for WASB.

2016-12-08 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733604#comment-15733604
 ] 

Mingliang Liu commented on HADOOP-13863:


{code}
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
[ERROR] COMPILATION ERROR :
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java:[978,53]
 cannot find symbol
  symbol:   class MockStorageInterface
  location: class org.apache.hadoop.fs.azure.AzureNativeFileSystemStore
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-azure: Compilation failure
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java:[978,53]
 cannot find symbol
[ERROR] symbol:   class MockStorageInterface
[ERROR] location: class org.apache.hadoop.fs.azure.AzureNativeFileSystemStore
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-azure
{code}

> Hadoop - Azure: Add a new SAS key mode for WASB.
> 
>
> Key: HADOOP-13863
> URL: https://issues.apache.org/jira/browse/HADOOP-13863
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure, fs/azure
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-13863.001.patch, WASB-SAS Key Mode-Design 
> Proposal.pdf
>
>
> The current implementation of WASB only supports Azure storage keys and SAS keys 
> being provided via org.apache.hadoop.conf.Configuration, which results in 
> these secrets residing in the same address space as the WASB process and 
> providing complete access to the Azure storage account and its containers. 
> Added to the fact that WASB does not inherently support ACLs, WASB in its 
> current implementation cannot be securely used in environments like a secure 
> Hadoop cluster. This JIRA is created to add a new mode in WASB, which 
> operates on Azure Storage SAS keys, which can provide fine-grained, timed 
> access to containers and blobs, providing a segue into supporting WASB for 
> secure Hadoop clusters.
> More details about the issue and the proposal are provided in the design 
> proposal document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13868) S3A should configure multi-part copies and uploads separately

2016-12-08 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733602#comment-15733602
 ] 

Sean Mackrory commented on HADOOP-13868:


FYI, I was mistaken about the current defaults - I was looking at the wrong repo / 
branch. The default size is currently 100 MB and the threshold is 2 GB. I've 
done some more testing (I've covered all the US regions), and as long as I'm 
using a bucket in that region, I'm consistently seeing that around 128 MB or so 
is where multi-part uploads start being faster (even though the actual raw 
throughput can vary significantly). I also compared the time to upload 1GB 
using AWS CLI (24.8s), hadoop fs -cp with these settings (27.7s), and hadoop fs 
-cp with the current defaults (53.1s).
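
As a sanity check on those numbers, the part count is just a ceiling division 
over the part size, so the 1 GB upload above splits into 11 parts at the current 
100 MB default. A back-of-the-envelope sketch (this assumes every part except 
the last is filled to the part size):

{code}
long partSize = 100L * 1024 * 1024;          // current default size: 100 MB
long objectSize = 1024L * 1024 * 1024;       // the 1 GB test upload above
long parts = (objectSize + partSize - 1) / partSize; // ceiling division
System.out.println(parts + " parts");        // prints 11
{code}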

> S3A should configure multi-part copies and uploads separately
> -
>
> Key: HADOOP-13868
> URL: https://issues.apache.org/jira/browse/HADOOP-13868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0, 3.0.0-alpha1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>
> I've been looking at a big performance regression when writing to S3 from 
> Spark that appears to have been introduced with HADOOP-12891.
> In the Amazon SDK, the default threshold for multi-part copies is 320x the 
> threshold for multi-part uploads (and the block size is 20x bigger), so I 
> don't think it's necessarily wise for us to have them be the same.
> I did some quick tests and it seems to me the sweet spot when multi-part 
> copies start being faster is around 512MB. It wasn't as significant, but 
> using 104857600 (Amazon's default) for the blocksize was also slightly better.
> I propose we do the following, although they're independent decisions:
> (1) Split the configuration. Ideally, I'd like to have 
> fs.s3a.multipart.copy.threshold and fs.s3a.multipart.upload.threshold (and 
> corresponding properties for the block size). But then there's the question 
> of what to do with the existing fs.s3a.multipart.* properties. Deprecation? 
> Leave it as a short-hand for configuring both (that's overridden by the more 
> specific properties?).
> (2) Consider increasing the default values. In my tests, 256 MB seemed to be 
> where multipart uploads came into their own, and 512 MB was where multipart 
> copies started outperforming the alternative. Would be interested to hear 
> what other people have seen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13863) Hadoop - Azure: Add a new SAS key mode for WASB.

2016-12-08 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733601#comment-15733601
 ] 

Mingliang Liu commented on HADOOP-13863:


This patch does not apply.

{code}
978   if (!(this.storageInteractionLayer instanceof 
MockStorageInterface) && useSasKeyMode) {
979 connectToAzureStorageInSASKeyMode(accountName, containerName, 
sessionUri);
980 return;
981   }
{code}

Probing the implementation details is not ideal, not to mention it's a mocked 
class in tests.
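
One conventional alternative (purely illustrative; all names below are made up, 
not the actual patch) is to let the tests inject the behavior rather than 
having production code sniff for the mock:

{code}
// Hedged sketch: replace the instanceof probe with an overridable hook.
class AzureStorageConnector {
  private final boolean useSasKeyMode;

  AzureStorageConnector(boolean useSasKeyMode) {
    this.useSasKeyMode = useSasKeyMode;
  }

  // Tests subclass and override this, instead of production code
  // checking for MockStorageInterface.
  protected boolean sasKeyModeEnabled() {
    return useSasKeyMode;
  }

  void connect(String accountName, String containerName, String sessionUri) {
    if (sasKeyModeEnabled()) {
      connectToAzureStorageInSASKeyMode(accountName, containerName, sessionUri);
      return;
    }
    connectWithAccountKeys(accountName, containerName);
  }

  private void connectToAzureStorageInSASKeyMode(String account,
      String container, String sessionUri) { /* elided */ }

  private void connectWithAccountKeys(String account, String container) {
    /* elided */
  }
}
{code}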

> Hadoop - Azure: Add a new SAS key mode for WASB.
> 
>
> Key: HADOOP-13863
> URL: https://issues.apache.org/jira/browse/HADOOP-13863
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure, fs/azure
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-13863.001.patch, WASB-SAS Key Mode-Design 
> Proposal.pdf
>
>
> The current implementation of WASB only supports Azure storage keys and SAS keys 
> being provided via org.apache.hadoop.conf.Configuration, which results in 
> these secrets residing in the same address space as the WASB process and 
> providing complete access to the Azure storage account and its containers. 
> Added to the fact that WASB does not inherently support ACLs, WASB in its 
> current implementation cannot be securely used in environments like a secure 
> Hadoop cluster. This JIRA is created to add a new mode in WASB, which 
> operates on Azure Storage SAS keys, which can provide fine-grained, timed 
> access to containers and blobs, providing a segue into supporting WASB for 
> secure Hadoop clusters.
> More details about the issue and the proposal are provided in the design 
> proposal document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2016-12-08 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733598#comment-15733598
 ] 

Aaron Fabbri commented on HADOOP-13345:
---

Looks like you did the commit *and* you also did a merge to update HADOOP-13345 
w/ trunk.  Thanks!

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2016-12-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733590#comment-15733590
 ] 

Steve Loughran commented on HADOOP-13345:
-

done

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-08 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733589#comment-15733589
 ] 

Konstantinos Karanasos commented on HADOOP-13852:
-

Hi, I think this commit broke {{TestRMWebServicesNodes}}.
Any idea what caused the problem and how we can fix it?

Thanks!

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the Hadoop 
> version (currently set to pom.version) to be overridden manually in the 
> build.
> This will not affect version names of artifacts, merely the declared Hadoop 
> version visible in {{VersionInfo.getVersion()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733584#comment-15733584
 ] 

John Zhuge edited comment on HADOOP-13597 at 12/8/16 10:27 PM:
---

However, do realize that the last {{elif}} condition can be omitted and changed 
to {{else}}:
{code}
  if [[ -z "${oldval}" ]]; then
return
  elif [[ -z "${newvar}" ]]; then
hadoop_error "WARNING: ${oldvar} has been deprecated."
  elif [[ -n "${oldval}" && -n "${newvar}" ]]; then
{code}



was (Author: jzhuge):
However, do realize that the following {{elif}} condition can be omitted and 
changed to {{else}}:
{code}
  elif [[ -n "${oldval}" && -n "${newvar}" ]]; then
{code}


> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, 
> HADOOP-13597.003.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733584#comment-15733584
 ] 

John Zhuge commented on HADOOP-13597:
-

However, do realize that the following {{elif}} condition can be omitted and 
changed to {{else}}:
{code}
  elif [[ -n "${oldval}" && -n "${newvar}" ]]; then
{code}


> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, 
> HADOOP-13597.003.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-12-08 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733574#comment-15733574
 ] 

Jitendra Nath Pandey commented on HADOOP-13565:
---

For code that splits the principal to parse out its different parts, it would be 
better to use the {{KerberosName}} class. 
This should be a minor refactoring.
+1 otherwise.
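
For reference, a sketch of the suggested refactor ({{KerberosName}} lives in 
hadoop-auth; the principal string is an example value):

{code}
import org.apache.hadoop.security.authentication.util.KerberosName;

// Parse the principal instead of hand-splitting on "/" and "@".
KerberosName kn = new KerberosName("HTTP/nn1.example.com@TEST.COM");
String service = kn.getServiceName(); // "HTTP"
String host = kn.getHostName();       // "nn1.example.com"
String realm = kn.getRealm();         // "TEST.COM"
{code}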

> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13565.00.patch, HADOOP-13565.01.patch, 
> HADOOP-13565.02.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use the canonicalized server 
> name derived from the HTTP request to build the server SPN and authenticate the 
> client. This can be problematic if the HTTP client/server are running from a 
> non-local Kerberos realm that the local realm has trust with (e.g., NN UI).
> For example, 
> The server is running its HTTP endpoint using an SPN from the client realm:
> hadoop.http.authentication.kerberos.principal
> HTTP/_HOST@TEST.COM
> When the client sends a request to the namenode at http://NN1.example.com:50070 from 
> client.test@TEST.COM.
> The client talks to the KDC first and gets a service ticket 
> HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO 
> negotiation. 
> The authentication will end up with either a "no valid credential" error or a 
> checksum failure, depending on the HTTP client name resolution or the HTTP Host 
> field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} will 
> always return an SPN with the local realm (HTTP/nn.example.com@EXAMPLE.COM) no 
> matter whether the server login SPN is from that domain or not. 
> The proposed fix is to use the default server login principal (by 
> passing null as the 1st parameter to gssManager.createCredential()) instead. 
> This way we avoid dependency on HTTP client behavior (Host header or name 
> resolution like CNAME) or assumptions about the local realm. 
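
A sketch of the proposed fix as described above (exception handling omitted; the 
OID helper is the one hadoop-auth already uses for SPNEGO):

{code}
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSManager;
import org.apache.hadoop.security.authentication.util.KerberosUtil;

GSSManager gssManager = GSSManager.getInstance();
// Passing null as the name means "use the default (server login)
// principal", so no SPN is rebuilt from the client's request.
GSSCredential serverCreds = gssManager.createCredential(
    null,
    GSSCredential.INDEFINITE_LIFETIME,
    KerberosUtil.getOidInstance("GSS_SPNEGO_MECH_OID"),
    GSSCredential.ACCEPT_ONLY);
{code}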



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733571#comment-15733571
 ] 

John Zhuge commented on HADOOP-13597:
-

[~aw] I tried to preserve exactly the same behavior as the existing 
{{hadoop_deprecate_envvar}} when {{newvar}} is not empty. Isn't {{-z}} the 
reverse of {{-n}}? Did I miss something?

The current code:
{code}
function hadoop_deprecate_envvar
{
  local oldvar=$1
  local newvar=$2
  local oldval=${!oldvar}
  local newval=${!newvar}

  if [[ -n "${oldval}" ]]; then  <<<
hadoop_error "WARNING: ${oldvar} has been replaced by ${newvar}. Using 
value of ${oldvar}."
# shellcheck disable=SC2086
eval ${newvar}=\"${oldval}\"

# shellcheck disable=SC2086
newval=${oldval}

# shellcheck disable=SC2086
eval ${newvar}=\"${newval}\"
  fi
}
{code}
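
Putting the earlier suggestion together with this, the simplified shape being 
discussed would look something like the following (a sketch assembled from the 
snippets above, not committed code):

{code}
function hadoop_deprecate_envvar
{
  local oldvar=$1
  local newvar=$2
  local oldval=${!oldvar}

  if [[ -z "${oldval}" ]]; then
    return    # old variable unset: nothing to do
  elif [[ -z "${newvar}" ]]; then
    hadoop_error "WARNING: ${oldvar} has been deprecated."
  else        # old variable set and a replacement name given
    hadoop_error "WARNING: ${oldvar} has been replaced by ${newvar}. Using value of ${oldvar}."
    # shellcheck disable=SC2086
    eval ${newvar}=\"${oldval}\"
  fi
}
{code}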

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, 
> HADOOP-13597.003.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13871:

Status: Patch Available  (was: Open)

> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13871-001.patch, HADOOP-13871-002.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes.
> This is a regression, and it's surfacing on some internal branches too. Even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13871:

Attachment: HADOOP-13871-002.patch

This is clearly some network problem; I'm going to reset everything locally 
and, if that's not it, start escalating.

What I can do, given there's an observable perf problem, is improve detection, 
diagnostics, and reaction to it.

This patch
# adds the ability to abort the current connection
# tracks bandwidth performance in the test, and aborts the connection if 
reading a 1MB block takes too long (the definition of "too long" is hard-coded)
# fails the test if there are too many resets
# starts a troubleshooting doc, but doesn't yet link to it from the aws index 
page

I'd thought about measuring duration on all reads and providing bandwidth since 
the last reset as an input stream statistic, but it gets complex; for now I'm 
avoiding that.
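To make the per-read check concrete, here is a hedged sketch; only 
{{resetConnection()}} comes from the patch, while the method, the field names, 
and the exact threshold ({{MIN_MB_PER_SEC}}) are illustrative:
{code}
// Illustrative sketch, not the patch itself. Assumes this sits in the test
// class; only resetConnection() is the real operation added by the patch.
private static final double MIN_MB_PER_SEC = 0.128; // ~128KB/s floor
private int resets;

int timedRead(FSDataInputStream in, byte[] buf) throws IOException {
  long t0 = System.nanoTime();
  int n = in.read(buf, 0, buf.length);
  double seconds = (System.nanoTime() - t0) / 1e9;
  if (n > 0 && (n / 1e6) / seconds < MIN_MB_PER_SEC) {
    // too slow: abort the underlying HTTP connection and force a reconnect
    ((S3AInputStream) in.getWrappedStream()).resetConnection();
    resets++;
  }
  return n;
}
{code}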

> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13871-001.patch, HADOOP-13871-002.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes.
> This is a regression, and it's surfacing on some internal branches too. Even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13871:

Status: Open  (was: Patch Available)

> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13871-001.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes.
> This is a regression, and it's surfacing on some internal branches too. Even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13824) FsShell can suppress the real error if no error message is present

2016-12-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13824:

Attachment: HADOOP-13824.003.patch

Patch 003:
* Wei-Chiu's comment

{{GenericTestUtils#assertMatches}} does a regex match, not a substring match. 
{{assertThat}} + {{containsString}} also prints the expected and actual strings.
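For illustration, a minimal JUnit 4 + Hamcrest sketch of that style of 
assertion (the strings here are hypothetical):
{code}
import static org.hamcrest.CoreMatchers.containsString;
import static org.junit.Assert.assertThat;

// Inside a @Test method. On failure, assertThat reports both the matcher
// description (expected) and the actual string, unlike a bare
// assertTrue(actual.contains(...)).
String actual = "mkdir: `/no/such/path': No such file or directory"; // hypothetical
assertThat(actual, containsString("No such file or directory"));
{code}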

> FsShell can suppress the real error if no error message is present
> --
>
> Key: HADOOP-13824
> URL: https://issues.apache.org/jira/browse/HADOOP-13824
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1, 2.7.3
>Reporter: Rob Vesse
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-13824.001.patch, HADOOP-13824.002.patch, 
> HADOOP-13824.003.patch
>
>
> The {{FsShell}} error handling assumes in {{displayError()}} that the 
> {{message}} argument is not {{null}}. However in the case where it is this 
> leads to a NPE which results in suppressing the actual error information 
> since a higher level of error handling kicks in and just dumps the stack 
> trace of the NPE instead.
> e.g.
> {noformat}
> Exception in thread "main" java.lang.NullPointerException
>   at org.apache.hadoop.fs.FsShell.displayError(FsShell.java:304)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:289)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> {noformat}
> This is deeply unhelpful because depending on what the underlying error was 
> there may be no stack dumped/logged for it (as HADOOP-7114 provides) since 
> {{FsShell}} doesn't explicitly dump traces for {{IllegalArgumentException}} 
> which appears to be the underlying cause of my issue.  Line 289 is where 
> {{displayError()}} is called for {{IllegalArgumentException}} handling and 
> that catch clause does not log the error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733487#comment-15733487
 ] 

Hadoop QA commented on HADOOP-13565:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-common-project/hadoop-auth: The patch 
generated 0 new + 0 unchanged - 28 fixed = 0 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
34s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842408/HADOOP-13565.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 76a0fa82277e 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 401c731 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11221/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11221/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects 

[jira] [Commented] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733471#comment-15733471
 ] 

Steve Loughran commented on HADOOP-13871:
-

The patch for trunk adds a new {{resetConnection()}} operation which aborts any 
active underlying connection, with this test modified to explicitly detect 
reads that are too slow and perform resets, failing if >8 resets happen during 
the run. This detects and reacts to bandwidth problems, albeit with a 
hard-coded threshold of 128KB/s. If you are getting less than that, you 
shouldn't be running this test. Except: after the abort, the TCP connection 
will be slow-starting back up to full bandwidth, so on any long-haul link the 
next read may also underperform. It's probably safer to use a lower value.

It would be possible to track bandwidth per

Example
{code}
2016-12-08 21:29:45,965 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:intOption(512)) - Value of fs.s3a.connection.maximum is 25
2016-12-08 21:29:45,967 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:intOption(512)) - Value of fs.s3a.attempts.maximum is 20
2016-12-08 21:29:45,967 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:intOption(512)) - Value of fs.s3a.connection.establish.timeout 
is 5000
2016-12-08 21:29:45,968 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:intOption(512)) - Value of fs.s3a.connection.timeout is 5000
2016-12-08 21:29:45,968 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:intOption(512)) - Value of fs.s3a.socket.send.buffer is 65536
2016-12-08 21:29:45,968 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:intOption(512)) - Value of fs.s3a.socket.recv.buffer is 32678
2016-12-08 21:29:45,970 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3ClientFactory.java:initUserAgent(187)) - Using User-Agent: Hadoop 
2.8.0-SNAPSHOT
2016-12-08 21:29:46,186 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:intOption(512)) - Value of fs.s3a.paging.maximum is 5000
2016-12-08 21:29:46,188 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:longBytesOption(555)) - Value of fs.s3a.block.size is 33554432
2016-12-08 21:29:46,188 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:longBytesOption(555)) - Value of fs.s3a.readahead.range is 65536
2016-12-08 21:29:46,190 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:intOption(512)) - Value of fs.s3a.max.total.tasks is 10
2016-12-08 21:29:46,190 [Thread-1] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:longOption(533)) - Value of fs.s3a.threads.keepalivetime is 60


2016-12-08 21:29:49,070 DEBUG amazonaws.request 
(AmazonHttpClient.java:logResponseRequestId(856)) - AWS Request ID: 
1FEDCD17B6FBB2C8
2016-12-08 21:29:49,381 DEBUG - Bytes in read #1: 16347 , block bytes: 16347, 
remaining in block: 1032229 duration=478865598 nS; ns/byte: 29293, 
bandwidth=0.032556 MB/s
2016-12-08 21:29:49,382 DEBUG - Bytes in read #2: 630 , block bytes: 16977, 
remaining in block: 1031599 duration=212331 nS; ns/byte: 337, 
bandwidth=2.829614 MB/s
2016-12-08 21:29:49,531 DEBUG - Bytes in read #3: 16347 , block bytes: 33324, 
remaining in block: 1015252 duration=149393770 nS; ns/byte: 9138, 
bandwidth=0.104353 MB/s
2016-12-08 21:29:49,532 DEBUG - Bytes in read #4: 1061 , block bytes: 34385, 
remaining in block: 1014191 duration=167998 nS; ns/byte: 158, 
bandwidth=6.022979 MB/s
2016-12-08 21:29:49,688 DEBUG - Bytes in read #5: 16347 , block bytes: 50732, 
remaining in block: 997844 duration=155828299 nS; ns/byte: 9532, 
bandwidth=0.100044 MB/s
2016-12-08 21:29:49,688 DEBUG - Bytes in read #6: 1061 , block bytes: 51793, 
remaining in block: 996783 duration=128858 nS; ns/byte: 121, bandwidth=7.852430 
MB/s
2016-12-08 21:29:49,841 DEBUG - Bytes in read #7: 16347 , block bytes: 68140, 
remaining in block: 980436 duration=152823750 nS; ns/byte: 9348, 
bandwidth=0.102011 MB/s
2016-12-08 21:29:49,842 DEBUG - Bytes in read #8: 1061 , block bytes: 69201, 
remaining in block: 979375 duration=280146 nS; ns/byte: 264, bandwidth=3.611861 
MB/s
2016-12-08 21:29:50,149 DEBUG - Bytes in read #9: 16347 , block bytes: 85548, 
remaining in block: 963028 duration=306567695 nS; ns/byte: 18753, 
bandwidth=0.050852 MB/s
2016-12-08 21:29:50,149 DEBUG - Bytes in read #10: 1061 , block bytes: 86609, 
remaining in block: 961967 duration=161057 nS; ns/byte: 151, bandwidth=6.282549 
MB/s
2016-12-08 21:29:51,073 DEBUG - Bytes in read #11: 16347 , block bytes: 102956, 
remaining in block: 945620 duration=923209136 nS; ns/byte: 56475, 
bandwidth=0.016886 MB/s
2016-12-08 21:29:51,073 DEBUG - Bytes in read #12: 1061 , block bytes: 104017, 
remaining in block: 944559 duration=172071 nS; ns/byte: 162, bandwidth=5.880412 
MB/s
2016-12-08 21:29:51,690 DEBUG - Bytes in read #13: 16347 , block bytes: 120364, 
remaining in block: 928212 duration=615842374 nS; ns/byte: 37673, 
bandwidth=0.025314 MB/s
2016-12-08 21:29:51,843 DEBUG - Bytes in read #14: 1061 , block bytes: 121425, 
remaining in block: 927151 duration=152502578 nS; ns/byte: 

[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-12-08 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13565:

Attachment: HADOOP-13565.02.patch

Updated the patch, fixing all the checkstyle issues. 

> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13565.00.patch, HADOOP-13565.01.patch, 
> HADOOP-13565.02.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use canonicalized server 
> name derived from HTTP request to build server SPN and authenticate client. 
> This can be problematic if the HTTP client/server are running from a 
> non-local Kerberos realm that the local realm has trust with (e.g., NN UI).
> For example, 
> The server is running its HTTP endpoint using SPN from the client realm:
> hadoop.http.authentication.kerberos.principal
> HTTP/_HOST@TEST.COM
> When client sends request to namenode at http://NN1.example.com:50070 from 
> client.test@test.com.
> The client talks to KDC first and gets a service ticket 
> HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO 
> negotiation. 
> The authentication will end up with either no valid credential error or 
> checksum failure depending on the HTTP client naming resolution or HTTP Host 
> field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} will 
> always return an SPN with the local realm (HTTP/nn.example@example.com), no 
> matter whether the server login SPN is from that realm or not. 
> The proposed fix is to change to use default server login principal (by 
> passing null as the 1st parameter to gssManager.createCredential()) instead. 
> This way we avoid dependency on HTTP client behavior (Host header or name 
> resolution like CNAME) or assumption on the local realm. 
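A hedged sketch of the direction the description proposes, not the attached 
patch itself; {{spnegoOid}} is assumed to be an SPNEGO mechanism {{Oid}} 
already in scope:
{code}
// Sketch only: with a null GSSName, the acceptor will use whatever server
// principals are present in the login subject, instead of an SPN rebuilt
// from the client-supplied Host header / client-side name resolution.
GSSCredential serverCred = gssManager.createCredential(
    null,                             // previously: a name built from the request
    GSSCredential.INDEFINITE_LIFETIME,
    spnegoOid,                        // org.ietf.jgss.Oid for SPNEGO (assumed)
    GSSCredential.ACCEPT_ONLY);
{code}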



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Status: In Progress  (was: Patch Available)

I think I can fix things so that we don't try to compile the integration tests 
in that module early. Moving out of Patch Available while I try.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, 
> HADOOP-11804.11.patch, HADOOP-11804.12.patch, HADOOP-11804.13.patch, 
> HADOOP-11804.2.patch, HADOOP-11804.3.patch, HADOOP-11804.4.patch, 
> HADOOP-11804.5.patch, HADOOP-11804.6.patch, HADOOP-11804.7.patch, 
> HADOOP-11804.8.patch, HADOOP-11804.9.patch, hadoop-11804-client-test.tar.gz
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733274#comment-15733274
 ] 

Steve Loughran commented on HADOOP-13871:
-

Video appears to show that everything is coming in out of order: 
https://youtu.be/4lJnknNtZNI

Having a tight timeout and an expanded rx buffer isn't enough, as there's enough 
capacity for OOO packets to be buffered, so fewer are discarded, hence pauses 
waiting for other bits of data to get resent.

This almost argues for a smaller rx buffer so it blocks faster, triggering 
timeouts. But that's a corner of TCP I'm not knowledgeable about. After all, 
OOO packets do appear to be arriving, so the channel is live, just not 
delivering data to the caller.

We could collect effective read stats (as the test does) within the input 
stream: just take nanotime counts before and after each read, and build up 
long-term stats for the current stream, which can then be queried. Although 
they could be aggregated, I tried that with the output and it doesn't work: if 
a network is full due to too many parallel connections, the effective bandwidth 
of each one is low, and aggregating via just total bytes/total elapsed time 
generates false statistics implying very low bandwidth rather than a saturated, 
shared network link. (For example, ten streams sharing a saturated 10 MB/s link 
each report about 1 MB/s, which looks like a slow network even though the link 
is fully utilized.)

> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13871-001.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes.
> This is a regression, and it's surfacing on some internal branches too. Even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733235#comment-15733235
 ] 

Sean Busbey commented on HADOOP-11804:
--

Ugh. Okay, the compile/javac failures come from running the test-compile goal 
directly, because the dependencies the integration-test module needs don't 
exist until after the package phase.
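(Illustrative commands only, assuming a build from the reactor root:)
{code}
# Running the goal directly stops every module before 'package', so the
# shaded jars the integration-test module depends on are never produced:
mvn test-compile          # fails in the integration-test module

# A full lifecycle run packages the sibling modules first:
mvn package -DskipTests
{code}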

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, 
> HADOOP-11804.11.patch, HADOOP-11804.12.patch, HADOOP-11804.13.patch, 
> HADOOP-11804.2.patch, HADOOP-11804.3.patch, HADOOP-11804.4.patch, 
> HADOOP-11804.5.patch, HADOOP-11804.6.patch, HADOOP-11804.7.patch, 
> HADOOP-11804.8.patch, HADOOP-11804.9.patch, hadoop-11804-client-test.tar.gz
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Attachment: HADOOP-11804.13.patch

-13

  - fixes checkstyle issues noted by test-patch

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, 
> HADOOP-11804.11.patch, HADOOP-11804.12.patch, HADOOP-11804.13.patch, 
> HADOOP-11804.2.patch, HADOOP-11804.3.patch, HADOOP-11804.4.patch, 
> HADOOP-11804.5.patch, HADOOP-11804.6.patch, HADOOP-11804.7.patch, 
> HADOOP-11804.8.patch, HADOOP-11804.9.patch, hadoop-11804-client-test.tar.gz
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Status: Patch Available  (was: In Progress)

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, 
> HADOOP-11804.11.patch, HADOOP-11804.12.patch, HADOOP-11804.13.patch, 
> HADOOP-11804.2.patch, HADOOP-11804.3.patch, HADOOP-11804.4.patch, 
> HADOOP-11804.5.patch, HADOOP-11804.6.patch, HADOOP-11804.7.patch, 
> HADOOP-11804.8.patch, HADOOP-11804.9.patch, hadoop-11804-client-test.tar.gz
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Status: In Progress  (was: Patch Available)

The findbugs -1 is a false negative caused by a test-file-only module. Working 
on checkstyle fixes now.

The patch build failure looks like a Maven version error.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, 
> HADOOP-11804.11.patch, HADOOP-11804.12.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch, HADOOP-11804.8.patch, 
> HADOOP-11804.9.patch, hadoop-11804-client-test.tar.gz
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733173#comment-15733173
 ] 

Hadoop QA commented on HADOOP-11804:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client hadoop-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
27s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  9m 
25s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  9m 25s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 43s{color} | {color:orange} root: The patch generated 6 new + 0 unchanged - 
0 fixed = 6 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  8m 
28s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
16s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client 
hadoop-client-modules/hadoop-client-api . hadoop-client-modules 
hadoop-client-modules/hadoop-client-check-invariants 
hadoop-client-modules/hadoop-client-check-test-invariants 
hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-runtime hadoop-dist {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} 
patch/hadoop-client-modules/hadoop-client-integration-tests no findbugs output 
file 
(hadoop-client-modules/hadoop-client-integration-tests/target/findbugsXml.xml) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 45s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
47s{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit 

[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Attachment: HADOOP-11804.12.patch

-12

  - rebase to trunk (c265515)
  - updates shaded minicluster to add in the java services declarations we 
excluded from shaded runtime
  - integration test for webhdfs now passes

A limitation of the Java Services API (and of the fact that some of the 
services we have implementations for are part of Java servlets) is that folks 
using the shaded minicluster for testing will see our shaded implementations 
of a few APIs.

I believe that as of this patch [~zhz]'s original problem should be solved. The 
previously attached example should also provide a manual way of verifying a run 
against a live cluster.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, 
> HADOOP-11804.11.patch, HADOOP-11804.12.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch, HADOOP-11804.8.patch, 
> HADOOP-11804.9.patch, hadoop-11804-client-test.tar.gz
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733129#comment-15733129
 ] 

Steve Loughran edited comment on HADOOP-13871 at 12/8/16 7:09 PM:
--

Also now seen on trunk. netstat shows the link is up, 

{code}
tcp4   0  0  192.168.1.12.55256 s3-us-west-2-r-w.https ESTABLISHED
{code}

and nettop shows inaction, though the rx_ooo counter seemed to be incrementing 
at 2KB/s for a bit, before hanging completely
{code}


   state  packets_inbytes_in packets_out   
bytes_out   rx_duperx_ooo re-tx   rtt_avg   rtt_var   rcvsizetx_win 
P C R W
java.16828  
   24502  13 MiB   8
3507 B  37 KiB  4654 KiB 0 B
   tcp4 192.168.1.12:55256<->s3-us-west-2-r-w.amazonaws.com:443 
 Established   24502  13 MiB   8
3507 B  37 KiB  4654 KiB 0 B   185.31 ms  15.03 ms   256 KiB21 KiB 
- - - -
{code}

That's 4MB of OOO packets for a 13 MB read, symptomatic of routing fun.

Then, suddenly, that TCP connection got closed (socket timeout) and a new one 
opened that went through the full dataset in a second or two:
{code}

   state  packets_inbytes_in packets_out   
bytes_out   rx_duperx_ooo re-tx   rtt_avg   rtt_var   rcvsizetx_win 
P C R W
java.16828  
   41636  37 MiB  25
9210 B  37 KiB  4654 KiB 0 B
   tcp4 192.168.1.12:55256<->s3-us-west-2-r-w.amazonaws.com:443 
FinWait2   24502  13 MiB   9
3560 B  37 KiB  4654 KiB 0 B   184.16 ms  12.44 ms   256 KiB21 KiB 
- - - -

{code}


The really good news: curl is now suffering too, which means it's not a Java 
problem. It's either the laptop (which has been rebooted with an SMC reset) or 
the rest of the network.
{code}

 $ curl -O https://landsat-pds.s3.amazonaws.com/scene_list.gz
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  5 37.4M5 2090k0 0  11824  0  0:55:21  0:03:01  0:52:20  7039

$ nettop -p 17105

   state  packets_inbytes_in packets_out   
bytes_out   rx_duperx_ooo re-tx   rtt_avg   rtt_var   rcvsizetx_win 
P C R W
curl.17105  
31782323 KiB   4 
482 B   10232 B 918 KiB 0 B
   tcp4 192.168.1.12:55731<->s3-us-west-2-w.amazonaws.com:443   
 Established31782323 KiB   4 
482 B   10232 B 918 KiB 0 B   173.56 ms  20.41 ms   256 KiB16 KiB - 
- - -

{code}
And on another attempt
{code}
 curl -O https://landsat-pds.s3.amazonaws.com/scene_list.gz
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 37.4M  100 37.4M0 0  4410k  0  0:00:08  0:00:08 --:--:-- 6382k

{code}

Conclusions


# sometimes over a network, we can get awful S3 read performance
# which goes away on a reconnect, including those detected by socket timeouts
# and which can be seen on other processes, so it's not a JVM/SDK problem
# which means that curl can be used as a probe independent of everything else, 
with nettop giving more details

I'm going to try setting some more aggressive socket timeouts than 200 seconds. 
If that does address this, maybe we should consider having a smaller default. 

Also: time for that advanced troubleshooting document.



was (Author: ste...@apache.org):
Also now seen on trunk. netstat shows the link is up, 

{code}
tcp4   0  0  192.168.1.12.55256 s3-us-west-2-r-w.https ESTABLISHED
{code}

and nettop shows inaction, though the rx_ooo counter seemed be incrementing a2 
2KB/s for a bit, before handing completely
{code}


   state  packets_inbytes_in packets_out   
bytes_out   rx_duperx_ooo re-tx   rtt_avg   rtt_var   rcvsizetx_win 
P C R W
java.16828  
   24502  13 MiB   8
3507 B

[jira] [Commented] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733129#comment-15733129
 ] 

Steve Loughran commented on HADOOP-13871:
-

Also now seen on trunk. netstat shows the link is up, 

{code}
tcp4   0  0  192.168.1.12.55256 s3-us-west-2-r-w.https ESTABLISHED
{code}

and nettop shows inaction, though the rx_ooo counter seemed to be incrementing 
at 2KB/s for a bit, before hanging completely
{code}


   state  packets_inbytes_in packets_out   
bytes_out   rx_duperx_ooo re-tx   rtt_avg   rtt_var   rcvsizetx_win 
P C R W
java.16828  
   24502  13 MiB   8
3507 B  37 KiB  4654 KiB 0 B
   tcp4 192.168.1.12:55256<->s3-us-west-2-r-w.amazonaws.com:443 
 Established   24502  13 MiB   8
3507 B  37 KiB  4654 KiB 0 B   185.31 ms  15.03 ms   256 KiB21 KiB 
- - - -
{code}

That's 4MB of OOO packets for a 13 MB read, symptomatic of routing fun.

Then, suddenly, that TCP connection got closed (socket timeout) and a new one 
opened that went through the full dataset in a second or two:
{code}

   state  packets_inbytes_in packets_out   
bytes_out   rx_duperx_ooo re-tx   rtt_avg   rtt_var   rcvsizetx_win 
P C R W
java.16828  
   41636  37 MiB  25
9210 B  37 KiB  4654 KiB 0 B
   tcp4 192.168.1.12:55256<->s3-us-west-2-r-w.amazonaws.com:443 
FinWait2   24502  13 MiB   9
3560 B  37 KiB  4654 KiB 0 B   184.16 ms  12.44 ms   256 KiB21 KiB 
- - - -

{code}


The really good news: curl is now suffering too, which means it's not a Java 
problem. It's either the laptop (which has been rebooted with an SMC reset) or 
the rest of the network.
{code}

 $ curl -O https://landsat-pds.s3.amazonaws.com/scene_list.gz
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  5 37.4M5 2090k0 0  11824  0  0:55:21  0:03:01  0:52:20  7039

$ nettop -p 17105

   state  packets_inbytes_in packets_out   
bytes_out   rx_duperx_ooo re-tx   rtt_avg   rtt_var   rcvsizetx_win 
P C R W
curl.17105  
31782323 KiB   4 
482 B   10232 B 918 KiB 0 B
   tcp4 192.168.1.12:55731<->s3-us-west-2-w.amazonaws.com:443   
 Established31782323 KiB   4 
482 B   10232 B 918 KiB 0 B   173.56 ms  20.41 ms   256 KiB16 KiB - 
- - -

{code}
And on another attempt
{code}
 curl -O https://landsat-pds.s3.amazonaws.com/scene_list.gz
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 37.4M  100 37.4M0 0  4410k  0  0:00:08  0:00:08 --:--:-- 6382k

{code}

Conclusions


# sometimes over a network, we can get awful S3 read performance
# which goes away on a reconnect, including those detected by socket timeouts
# and which can be seen on other processes, so it's not a JVM/SDK problem
# which means that curl can be used as a probe independent of everything else, 
with nettop giving more details

I'm going to try setting some more aggressive socket timeouts than 200 seconds. 
If that does address this, maybe we should consider having a smaller default. 

Also: time for that advanced troubleshooting document.


> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13871-001.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The 

[jira] [Comment Edited] (HADOOP-13873) log DNS addresses on s3a init

2016-12-08 Thread Andres Perez (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733093#comment-15733093
 ] 

Andres Perez edited comment on HADOOP-13873 at 12/8/16 7:00 PM:


Would adding this to {{S3AFileSystem#initialize}} be enough?
{code}
...
  bucket = name.getHost();

  if(LOG.isDebugEnabled()) {
LOG.debug("Bucket endpoint: "
+ InetAddress.getByName(bucket).toString());
  }
...
{code}


was (Author: aaperezl):
Would adding this to {{S3AFileSystem#initialize}} be enough:
{code}
...
  bucket = name.getHost();

  if(LOG.isDebugEnabled()) {
LOG.debug("Bucket endpoint: "
+ InetAddress.getByName(bucket).toString());
  }
...
{code}

> log DNS addresses on s3a init
> -
>
> Key: HADOOP-13873
> URL: https://issues.apache.org/jira/browse/HADOOP-13873
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> HADOOP-13871 has shown that network problems can kill perf, and that it's v. 
> hard to track down, even if you turn up the logging in hadoop.fs.s3a and 
> com.amazon layers to debug.
> we could maybe improve things by printing out the IPAddress of the s3 
> endpoint, as that could help with the network tracing. Printing from within 
> hadoop shows the one given to S3a, not a different one returned by any load 
> balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13873) log DNS addresses on s3a init

2016-12-08 Thread Andres Perez (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733093#comment-15733093
 ] 

Andres Perez commented on HADOOP-13873:
---

Would adding this to {{S3AFileSystem#initialize}} be enough:
{code}
...
  bucket = name.getHost();

  if(LOG.isDebugEnabled()) {
LOG.debug("Bucket endpoint: "
+ InetAddress.getByName(bucket).toString());
  }
...
{code}
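One caveat on the sketch above: the {{isDebugEnabled()}} guard is doing real 
work here, because the argument is evaluated before the logger is called. A 
hedged comparison, assuming the same {{LOG}} and {{bucket}} fields:
{code}
// Guarded: the (possibly blocking) DNS lookup only runs when DEBUG is on.
if (LOG.isDebugEnabled()) {
  LOG.debug("Bucket endpoint: " + InetAddress.getByName(bucket));
}

// NOT equivalent: parameterized logging defers the string formatting, but
// InetAddress.getByName() is still evaluated eagerly at every log level.
LOG.debug("Bucket endpoint: {}", InetAddress.getByName(bucket));
{code}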

> log DNS addresses on s3a init
> -
>
> Key: HADOOP-13873
> URL: https://issues.apache.org/jira/browse/HADOOP-13873
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> HADOOP-13871 has shown that network problems can kill perf, and that it's v. 
> hard to track down, even if you turn up the logging in hadoop.fs.s3a and 
> com.amazon layers to debug.
> we could maybe improve things by printing out the IPAddress of the s3 
> endpoint, as that could help with the network tracing. Printing from within 
> hadoop shows the one given to S3a, not a different one returned by any load 
> balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733061#comment-15733061
 ] 

Hadoop QA commented on HADOOP-13565:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-common-project/hadoop-auth: The patch 
generated 36 new + 17 unchanged - 11 fixed = 53 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
33s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839099/HADOOP-13565.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8cb94f4d5284 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c265515 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11218/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-auth.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11218/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11218/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: 

[jira] [Commented] (HADOOP-13875) HttpServer2 should support more SSL configuration properties

2016-12-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733048#comment-15733048
 ] 

John Zhuge commented on HADOOP-13875:
-

HADOOP-13597 will add a new method {{HttpServer2$Builder#loadSSLConfiguration}}. 
Load more SSL configuration properties there.

> HttpServer2 should support more SSL configuration properties
> 
>
> Key: HADOOP-13875
> URL: https://issues.apache.org/jira/browse/HADOOP-13875
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Support more SSL configuration properties:
> - enabled.protocols
> - includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13875) HttpServer2 should support more SSL configuration properties

2016-12-08 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13875:
---

 Summary: HttpServer2 should support more SSL configuration 
properties
 Key: HADOOP-13875
 URL: https://issues.apache.org/jira/browse/HADOOP-13875
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge


Support more SSL configuration properties:
- enabled.protocols
- includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13871:

Affects Version/s: 3.0.0-alpha2
   2.9.0
  Description: 
The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
15s on branch-2, but is now taking minutes.

This is a regression, and it's surfacing on some internal branches too. Even 
where the code hasn't changed. -It does not happen on branch-2, which has a 
later SDK.-



  was:
The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
15s on branch-2, but is now taking minutes on branch-2.8.

This is a regression, and it's surfacing on some internal branches too. Even 
where the code hasn't changed. -It does not happen on branch-2, which has a 
later SDK.-




> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13871-001.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes.
> This is a regression, and it's surfacing on some internal branches too. Even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13852:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Fixed in trunk; also added a couple of lines to BUILDING.TXT about how to do 
this.

(+ added {{-Dmaven.javadoc.skip=true}} as an option in {{mvn package}}; 
everyone should know that)
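(Illustrative invocation only; the property name below is hypothetical, and 
BUILDING.TXT has the exact flag:)
{code}
# Hypothetical property name; check BUILDING.TXT for the real one.
mvn package -Pdist -DskipTests -Dmaven.javadoc.skip=true \
    -Ddeclared.hadoop.version=2.11.0
{code}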

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden 
> manually.
> This will not affect version names of artifacts, merely the declared Hadoop 
> version visible in {{VersionInfo.getVersion()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13874) TestSSLHttpServer failures

2016-12-08 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13874:
---

 Summary: TestSSLHttpServer failures
 Key: HADOOP-13874
 URL: https://issues.apache.org/jira/browse/HADOOP-13874
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Critical


All exceptions look like "Cannot support ... with currently installed 
providers". I am running CentOS 7.2.1511 with native enabled.
{noformat}
Tests run: 5, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 1.593 sec <<< 
FAILURE! - in org.apache.hadoop.http.TestSSLHttpServer
testExclusiveEnabledCiphers(org.apache.hadoop.http.TestSSLHttpServer)  Time 
elapsed: 0.012 sec  <<< ERROR!
java.lang.IllegalArgumentException: Cannot support 
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA with currently installed providers
at sun.security.ssl.CipherSuiteList.&lt;init&gt;(CipherSuiteList.java:92)
at 
sun.security.ssl.SSLSocketImpl.setEnabledCipherSuites(SSLSocketImpl.java:2461)
at 
org.apache.hadoop.http.TestSSLHttpServer$PrefferedCipherSSLSocketFactory.createSocket(TestSSLHttpServer.java:269)
at 
sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:436)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1513)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
at 
org.apache.hadoop.http.TestSSLHttpServer.testExclusiveEnabledCiphers(TestSSLHttpServer.java:227)

testOneEnabledCiphers(org.apache.hadoop.http.TestSSLHttpServer)  Time elapsed: 
0.004 sec  <<< ERROR!
java.lang.IllegalArgumentException: Cannot support 
TLS_ECDHE_RSA_WITH_RC4_128_SHA with currently installed providers
at sun.security.ssl.CipherSuiteList.&lt;init&gt;(CipherSuiteList.java:92)
at 
sun.security.ssl.SSLSocketImpl.setEnabledCipherSuites(SSLSocketImpl.java:2461)
at 
org.apache.hadoop.http.TestSSLHttpServer$PrefferedCipherSSLSocketFactory.createSocket(TestSSLHttpServer.java:269)
at 
sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:436)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1513)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
at 
org.apache.hadoop.http.TestSSLHttpServer.testOneEnabledCiphers(TestSSLHttpServer.java:200)

testExcludedCiphers(org.apache.hadoop.http.TestSSLHttpServer)  Time elapsed: 
0.015 sec  <<< ERROR!
java.lang.IllegalArgumentException: Cannot support 
TLS_ECDHE_RSA_WITH_RC4_128_SHA with currently installed providers
at sun.security.ssl.CipherSuiteList.&lt;init&gt;(CipherSuiteList.java:92)
at 
sun.security.ssl.SSLSocketImpl.setEnabledCipherSuites(SSLSocketImpl.java:2461)
at 
org.apache.hadoop.http.TestSSLHttpServer$PrefferedCipherSSLSocketFactory.createSocket(TestSSLHttpServer.java:269)
at 
sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:436)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1513)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
at 
org.apache.hadoop.http.TestSSLHttpServer.testExcludedCiphers(TestSSLHttpServer.java:176)
{noformat}
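
For diagnosis, a quick way to list the suites the local JRE actually 
supports; any suite absent from this list makes {{setEnabledCipherSuites()}} 
fail with the "Cannot support ... with currently installed providers" error 
above (a sketch, not part of the test):
{code}
import javax.net.ssl.SSLContext;

public class ListCipherSuites {
  public static void main(String[] args) throws Exception {
    // Enumerate every cipher suite the installed JSSE providers offer.
    for (String suite : SSLContext.getDefault()
        .getSupportedSSLParameters().getCipherSuites()) {
      System.out.println(suite);
    }
  }
}
{code}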

My source tree sync'd to:
{noformat}
9ef89ed HDFS-11140. Directory Scanner should log startup message time 
correctly. Contributed by Yiqun Lin.
{noformat}

My SSL environment:
{noformat}
$ curl -sS https://www.howsmyssl.com/a/check | python -m json.tool
{
"able_to_detect_n_minus_one_splitting": false,
"beast_vuln": false,
"ephemeral_keys_supported": true,
"given_cipher_suites": [
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",

[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732973#comment-15732973
 ] 

Hudson commented on HADOOP-13852:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10971 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10971/])
HADOOP-13852 hadoop build to allow hadoop version property to be (stevel: rev 
c2655157257079b8541d71bb1e5b6cbae75561ff)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
* (edit) 
hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
* (edit) BUILDING.txt
* (edit) hadoop-common-project/hadoop-common/pom.xml


> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch
>
>
> Hive (and transitively) Spark, won't start on Hadoop 3.x as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) to be overridden 
> manually.
> This will not affect version names of artifacts, merely the declared Hadoop 
> version visible in {{VersionInfo.getVersion()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732944#comment-15732944
 ] 

Allen Wittenauer commented on HADOOP-13597:
---

{code}
+  if [[ -z "${oldval}" ]]; then
+return
{code}

This is an even worse side effect since it means that we don't promote old 
values into new ones when the vars have been renamed.  (There are a lot of 
them!)

Also, where are the unit tests for the bash functions you added?



> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, 
> HADOOP-13597.003.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-12-08 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13565:

Status: Patch Available  (was: Reopened)

> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13565.00.patch, HADOOP-13565.01.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use the canonicalized 
> server name derived from the HTTP request to build the server SPN and 
> authenticate the client. This can be problematic if the HTTP client/server 
> are running from a non-local Kerberos realm that the local realm has trust 
> with (e.g., NN UI).
> For example, 
> the server is running its HTTP endpoint using an SPN from the client realm:
> hadoop.http.authentication.kerberos.principal
> HTTP/_HOST@TEST.COM
> When the client sends a request to the namenode at 
> http://NN1.example.com:50070 from client.test@TEST.COM,
> the client talks to the KDC first and gets a service ticket 
> HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO 
> negotiation. 
> The authentication will end up with either a "no valid credential" error or 
> a checksum failure, depending on the HTTP client's name resolution or the 
> HTTP Host field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", 
> serverName)}} will always return an SPN in the local realm 
> (HTTP/nn.example.com@EXAMPLE.COM) whether or not the server's login SPN is 
> from that realm. 
> The proposed fix is to use the default server login principal (by passing 
> null as the 1st parameter to gssManager.createCredential()) instead. This 
> way we avoid depending on HTTP client behavior (Host header or name 
> resolution like CNAME) or assumptions about the local realm. 
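
A minimal sketch of the proposal in that last paragraph, using the plain JDK 
GSS API (illustration only, not the actual patch):
{code}
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.Oid;

public class DefaultServerCreds {
  public static GSSCredential acquire() throws GSSException {
    GSSManager gssManager = GSSManager.getInstance();
    // Passing null for the name acquires credentials for the server's
    // own login principal(s) rather than an SPN rebuilt from the HTTP
    // request's Host header.
    return gssManager.createCredential(
        null,
        GSSCredential.INDEFINITE_LIFETIME,
        new Oid("1.3.6.1.5.5.2"),   // SPNEGO mechanism OID
        GSSCredential.ACCEPT_ONLY);
  }
}
{code}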



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class

2016-12-08 Thread Luke Miner (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732908#comment-15732908
 ] 

Luke Miner commented on HADOOP-13811:
-

Sorry to be a pain about this. Would it be possible for you to share a prebuilt 
version with me? I'd love to get this fixed!

> s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to 
> sanitize XML document destined for handler class
> -
>
> Key: HADOOP-13811
> URL: https://issues.apache.org/jira/browse/HADOOP-13811
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Sometimes, occasionally, getFileStatus() fails with a stack trace starting 
> with {{com.amazonaws.AmazonClientException: Failed to sanitize XML document 
> destined for handler class}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732830#comment-15732830
 ] 

John Zhuge commented on HADOOP-13597:
-

Thanks [~jojochuang] for the review!

bq. There's one bit that might cause confusion in deployment. The fact that 
the keystore password could come from an environment variable, the 
configuration file, or credential files (via Configuration#getPassword) makes 
me feel a bit uneasy. If the password comes from a credential file, it will 
also need ProviderUtils.excludeIncompatibleCredentialProviders in order to 
trim credential files on HdfsFileSystems.

Very good point! I will start a wider discussion. As a precaution, I could 
revert to the existing KMS approach, which does not consult the credential 
provider, and file a separate JIRA to integrate with the credential provider.
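
For reference, the lookup order under discussion, via 
{{Configuration#getPassword}} (the key name below is just an example):
{code}
import org.apache.hadoop.conf.Configuration;

public class PasswordLookup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // getPassword() consults any configured credential providers first,
    // then falls back to the configuration value itself; null if absent.
    char[] password = conf.getPassword("ssl.server.keystore.password");
    System.out.println(password == null ? "not set" : "found");
  }
}
{code}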

bq. It seems the KMS server is not Kerberized. You might want to construct a 
HttpServer2 object with extra options to enable Kerberos:

KMS uses the {{KMSAuthenticationFilter}} specified in web.xml instead of the 
generic {{AuthenticationFilter}} added by HttpServer2.

bq. When KMSWebServer starts/stops, it should print corresponding message using 
StringUtils.startupShutdownMessage. This will make supporters' life easier.

Will do.

bq. I think you can not assume the admin user is user.name=kms when accessing 
the servlets such as jmx, loglevel, etc. Also, need to make sure access 
permission is guarded properly accessing these servlets.

I don't think the existing KMS supports an admin user. Will add this feature.

bq. I am not sure how existing KMS handles truststore and its password, but I 
think you might be missing something in the new KMS when handling truststore 
and its password.

The truststore password was made obsolete by HADOOP-13864.

bq. The keystore password comes from configuration key 
hadoop.security.keystore.java-keystore-provider.password-file. If I understand 
ConfigRedact correctly it doesn't seem to redact this specific configuration 
key to me. Could you double check?

This key names the password file, not the password itself, so it does not 
have to be redacted.

bq. In Configuration#getPasswordString, please print name if there's an 
IOException to log. Also, should it catch IOException and return null? If it 
looks for a password but is unable to, would it be easier to let the exception 
be thrown? It could be a troubleshooting nightmare I imagine.

Good point. {{getPasswordString}} seems unnecessary; I'll remove it.

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, 
> HADOOP-13597.003.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13824) FsShell can suppress the real error if no error message is present

2016-12-08 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732752#comment-15732752
 ] 

Wei-Chiu Chuang commented on HADOOP-13824:
--

Forgot to mention I am +1 for the patch and will commit soon.

> FsShell can suppress the real error if no error message is present
> --
>
> Key: HADOOP-13824
> URL: https://issues.apache.org/jira/browse/HADOOP-13824
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1, 2.7.3
>Reporter: Rob Vesse
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-13824.001.patch, HADOOP-13824.002.patch
>
>
> The {{FsShell}} error handling assumes in {{displayError()}} that the 
> {{message}} argument is not {{null}}. However in the case where it is this 
> leads to a NPE which results in suppressing the actual error information 
> since a higher level of error handling kicks in and just dumps the stack 
> trace of the NPE instead.
> e.g.
> {noformat}
> Exception in thread "main" java.lang.NullPointerException
>   at org.apache.hadoop.fs.FsShell.displayError(FsShell.java:304)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:289)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> {noformat}
> This is deeply unhelpful because depending on what the underlying error was 
> there may be no stack dumped/logged for it (as HADOOP-7114 provides) since 
> {{FsShell}} doesn't explicitly dump traces for {{IllegalArgumentException}} 
> which appears to be the underlying cause of my issue.  Line 289 is where 
> {{displayError()}} is called for {{IllegalArgumentException}} handling and 
> that catch clause does not log the error.
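
The shape of the fix being reviewed is a simple null guard; a hypothetical 
sketch (not the attached patch):
{code}
public class NullSafeError {
  // Report something useful when the thrown exception carries no
  // message, so the original error is not masked by a secondary NPE.
  static void displayError(String cmd, String message) {
    String text = (message == null) ? "unknown error (no message)" : message;
    System.err.println(cmd + ": " + text);
  }
}
{code}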



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732620#comment-15732620
 ] 

Hadoop QA commented on HADOOP-13871:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13871 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842366/HADOOP-13871-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 98b111417282 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0ef7961 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11217/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11217/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13871-001.patch, 
> 

[jira] [Updated] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13871:

Status: Patch Available  (was: In Progress)

Tested against S3 Ireland. All works well except this scale test.

> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13871-001.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes on branch-2.8.
> This is a regression, and it's surfacing on some internal branches too. Even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13871:

Attachment: HADOOP-13871-001.patch

Patch 001: log the time, bytes/ns and bandwidth of every read. This is what's 
generating the detailed log info in the attachments; it shows how a read can 
stall until a socket timeout and then, on a new HTTP connection, fly.
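
Roughly the kind of per-read instrumentation described, as a sketch (not the 
attached patch):
{code}
import java.io.IOException;
import java.io.InputStream;

public class TimedReads {
  // Time each read() call and log elapsed nanoseconds and ns/byte.
  static int timedRead(InputStream in, byte[] buf) throws IOException {
    long start = System.nanoTime();
    int n = in.read(buf);
    long elapsed = System.nanoTime() - start;
    if (n > 0) {
      System.out.printf("read %d bytes in %d ns (%.1f ns/byte)%n",
          n, elapsed, (double) elapsed / n);
    }
    return n;
  }
}
{code}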

> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13871-001.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes on branch-2.8.
> This is a regression, and it's surfacing on some internal branches too. Even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13871 started by Steve Loughran.
---
> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes on branch-2.8.
> This is a regression, and it's surfacing on some internal branches too. Even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Status: Patch Available  (was: In Progress)

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, 
> HADOOP-11804.11.patch, HADOOP-11804.2.patch, HADOOP-11804.3.patch, 
> HADOOP-11804.4.patch, HADOOP-11804.5.patch, HADOOP-11804.6.patch, 
> HADOOP-11804.7.patch, HADOOP-11804.8.patch, HADOOP-11804.9.patch, 
> hadoop-11804-client-test.tar.gz
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-12-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Attachment: HADOOP-11804.11.patch

-11
  - adds an integration test for using the shaded minicluster
  - fixes some gaps found in dependencies for runtime vs. minicluster
  - fixes a typo in the relocation of javax.servlet for minicluster

known limitation: the integration test I added for webhdfs access currently 
fails. I believe this is the failure [~zhz] was originally referring to. I'm 
still trying to figure out a workable solution.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, 
> HADOOP-11804.11.patch, HADOOP-11804.2.patch, HADOOP-11804.3.patch, 
> HADOOP-11804.4.patch, HADOOP-11804.5.patch, HADOOP-11804.6.patch, 
> HADOOP-11804.7.patch, HADOOP-11804.8.patch, HADOOP-11804.9.patch, 
> hadoop-11804-client-test.tar.gz
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732357#comment-15732357
 ] 

Steve Loughran commented on HADOOP-13871:
-

And it is replicable with the latest SDK. This at least implies it's not an 
SDK problem, but some local environment or networking issue.
{code}
2016-12-08 14:23:21,981 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(135)) 
- http-outgoing-3 >> GET /scene_list.gz HTTP/1.1
2016-12-08 14:23:21,981 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) 
- http-outgoing-3 >> Host: landsat-pds.s3-us-west-2.amazonaws.com
2016-12-08 14:23:21,981 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) 
- http-outgoing-3 >> x-amz-content-sha256: UNSIGNED-PAYLOAD
2016-12-08 14:23:21,981 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) 
- http-outgoing-3 >> Authorization: AWS4-HMAC-SHA256 
Credential=AKIAIYZ5JQOW3N5H6NPA/20161208/us-west-2/s3/aws4_request, 
SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
 Signature=26e0566a3ef87493309d56eac330d20d2a071010ded7cbff9886e7c7cef1bd86
2016-12-08 14:23:21,981 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) 
- http-outgoing-3 >> X-Amz-Date: 20161208T142321Z
2016-12-08 14:23:21,981 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) 
- http-outgoing-3 >> User-Agent: Hadoop 2.8.0-SNAPSHOT, aws-sdk-java/1.11.45 
Mac_OS_X/10.12.1 Java_HotSpot(TM)_64-Bit_Server_VM/25.102-b14/1.8.0_102
2016-12-08 14:23:21,981 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) 
- http-outgoing-3 >> amz-sdk-invocation-id: ddf8d5b5-96a3-57a4-96d6-fbb66f63f6fa
2016-12-08 14:23:21,981 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) 
- http-outgoing-3 >> amz-sdk-retry: 0/0/500
2016-12-08 14:23:21,982 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) 
- http-outgoing-3 >> Range: bytes=0-39234217
2016-12-08 14:23:21,982 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) 
- http-outgoing-3 >> Content-Type: application/octet-stream
2016-12-08 14:23:21,982 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) 
- http-outgoing-3 >> Connection: Keep-Alive
2016-12-08 14:23:22,301 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(124)) 
- http-outgoing-3 << HTTP/1.1 206 Partial Content
2016-12-08 14:23:22,301 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) 
- http-outgoing-3 << x-amz-id-2: 
SykItR7JBcTf1TsNkVK58VYN2164t1IDsMa+ZpLUBGuJW+Bxb9ONEAHFvi85JhhGvLZJxXlwc3k=
2016-12-08 14:23:22,301 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) 
- http-outgoing-3 << x-amz-request-id: 0AE8E8AD401F45D6
2016-12-08 14:23:22,301 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) 
- http-outgoing-3 << Date: Thu, 08 Dec 2016 14:23:23 GMT
2016-12-08 14:23:22,301 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) 
- http-outgoing-3 << Last-Modified: Thu, 08 Dec 2016 11:32:05 GMT
2016-12-08 14:23:22,302 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) 
- http-outgoing-3 << ETag: "00585bfc6fa4c4295c5a0073f7fa6922"
2016-12-08 14:23:22,302 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) 
- http-outgoing-3 << Accept-Ranges: bytes
2016-12-08 14:23:22,302 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) 
- http-outgoing-3 << Content-Range: bytes 0-39234216/39234217
2016-12-08 14:23:22,302 [JUnit-testTimeToOpenAndR

[jira] [Updated] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13871:

Description: 
The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
15s on branch-2, but is now taking minutes on branch-2.8.

This is a regression, and it's surfacing on some internal branches too. Even 
where the code hasn't changed. -It does not happen on branch-2, which has a 
later SDK.-



  was:
The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
15s on branch-2, but is now taking minutes on branch-2.8.

This is a regression, and it's surfacing on some internal branches too. Even 
where the code hasn't changed. It does not happen on branch-2, which has a 
later SDK.




> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes on branch-2.8.
> This is a regression, and it's surfacing on some internal branches too. Even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13871:

Attachment: 
org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt

Attaching a log with detail on every single read which takes place. For the 
initial operation, it's taking hundreds of ns/byte, until eventually there's 
a socket timeout and a reconnect, after which I get the expected performance.

Tested over wifi as well as Ethernet.

> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes on branch-2.8.
> This is a regression, and it's surfacing on some internal branches too. Even 
> where the code hasn't changed. It does not happen on branch-2, which has a 
> later SDK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13867) FilterFileSystem should override rename(.., options) to take effect of Rename options called via FilterFileSystem implementations

2016-12-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732291#comment-15732291
 ] 

Hudson commented on HADOOP-13867:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10970 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10970/])
HADOOP-13867. FilterFileSystem should override rename(.., options) to (brahma: 
rev 0ef796174ecb5383f79cfecfcbfc4f309d093cd7)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java


> FilterFileSystem should override rename(.., options) to take effect of Rename 
> options called via FilterFileSystem implementations
> -
>
> Key: HADOOP-13867
> URL: https://issues.apache.org/jira/browse/HADOOP-13867
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13867-01.patch
>
>
> HDFS-8312 Added Rename.TO_TRASH option to add a security check before moving 
> to trash.
> But for FilterFileSystem implementations, since this rename(.., options) is 
> not overridden, it uses the default FileSystem implementation, where the 
> Rename.TO_TRASH option is not delegated to the NameNode.
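
The override itself is a one-line delegation; a sketch of the idea (the class 
name is hypothetical, and this is not the committed patch):
{code}
package org.apache.hadoop.fs;

import java.io.IOException;
import org.apache.hadoop.fs.Options.Rename;

// In the fs package so the protected rename(.., options) on the wrapped
// instance is accessible, just as it is for FilterFileSystem itself.
public class DelegatingFilterFileSystem extends FilterFileSystem {
  @Override
  protected void rename(Path src, Path dst, Rename... options)
      throws IOException {
    // Forward the options variant to the wrapped FileSystem so flags
    // such as Rename.TO_TRASH actually reach it.
    fs.rename(src, dst, options);
  }
}
{code}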



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12476) TestDNS.testLookupWithoutHostsFallback failing

2016-12-08 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732277#comment-15732277
 ] 

Ewan Higgs commented on HADOOP-12476:
-

I think I found the issue. My Ubuntu box is/was running bind9, so it can 
resolve its own DNS requests. When we pretend that we can't hit a DNS server, 
things are still getting resolved anyway.

OS X seems to be working, so maybe ignore what I said before about that.

> TestDNS.testLookupWithoutHostsFallback failing
> --
>
> Key: HADOOP-12476
> URL: https://issues.apache.org/jira/browse/HADOOP-12476
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net, test
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Presumably triggered by HADOOP-12449, one of the jenkins patch runs has 
> failed in {{TestDNS.testLookupWithoutHostsFallback}}, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12476) TestDNS.testLookupWithoutHostsFallback failing

2016-12-08 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732239#comment-15732239
 ] 

Ewan Higgs commented on HADOOP-12476:
-

{quote}However I am not sure why it fails at all. Wei-Chiu Chuang, have you 
seen it fail outside of Jenkins?{quote}
This fails for me outside Jenkins. 

I'm running this on two machines: Ubuntu 16.10 and OS X El Capitan. It fails 
on Oracle JDK 8 and OpenJDK 8.

It also fails on trunk and branch-2.8 (branch-2.7 doesn't have the same 
test). I also checked out the first commit with the test (df31c446) and it 
failed (pointing Java at 8 or 7).

> TestDNS.testLookupWithoutHostsFallback failing
> --
>
> Key: HADOOP-12476
> URL: https://issues.apache.org/jira/browse/HADOOP-12476
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net, test
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Presumably triggered by HADOOP-12449, one of the jenkins patch runs has 
> failed in {{TestDNS.testLookupWithoutHostsFallback}}, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13867) FilterFileSystem should override rename(.., options) to take effect of Rename options called via FilterFileSystem implementations

2016-12-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-13867:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks [~vinayrpet] for the contribution and 
[~andrew.wang] for the review.

> FilterFileSystem should override rename(.., options) to take effect of Rename 
> options called via FilterFileSystem implementations
> -
>
> Key: HADOOP-13867
> URL: https://issues.apache.org/jira/browse/HADOOP-13867
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13867-01.patch
>
>
> HDFS-8312 Added Rename.TO_TRASH option to add a security check before moving 
> to trash.
> But for FilterFileSystem implementations, since this rename(.., options) is 
> not overridden, it uses the default FileSystem implementation, where the 
> Rename.TO_TRASH option is not delegated to the NameNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732220#comment-15732220
 ] 

Steve Loughran commented on HADOOP-13871:
-

It looks suspiciously like the ~60s delay per read is somehow being triggered 
by blocking somewhere in the stack, with the connection releases triggering 
the completion. Or these are unrelated events, and all that's happened is 
that the original connections that were set up are now being released.

> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance on branch-2.8 awful
> ---
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes on branch-2.8.
> This is a regression, and it's surfacing on some internal branches too. Even 
> where the code hasn't changed. It does not happen on branch-2, which has a 
> later SDK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732215#comment-15732215
 ] 

Steve Loughran edited comment on HADOOP-13871 at 12/8/16 1:32 PM:
--

Logging org.apache.http 
{code}
2016-12-08 13:26:20,458 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
s3a.S3AFileSystem (S3AInputStream.java:reopen(140)) - 
reopen(s3a://landsat-pds/scene_list.gz) for read from new offset 
range[0-39234217], length=1048576, streamPosition=0, nextReadPosition=0
2016-12-08 13:26:20,460 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
amazonaws.request (AmazonHttpClient.java:executeOneRequest(671)) - Sending 
Request: GET https://landsat-pds.s3.amazonaws.com /scene_list.gz Headers: 
(User-Agent: Hadoop 2.8.0-SNAPSHOT, aws-sdk-java/1.10.6 Mac_OS_X/10.12.1 
Java_HotSpot(TM)_64-Bit_Server_VM/25.102-b14/1.8.0_102, Range: 
bytes=0-39234217, Content-Type: application/x-www-form-urlencoded; 
charset=utf-8, ) 
2016-12-08 13:26:20,460 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
conn.PoolingClientConnectionManager 
(PoolingClientConnectionManager.java:requestConnection(184)) - Connection 
request: [route: {s}->https://landsat-pds.s3.amazonaws.com:443][total kept 
alive: 1; route allocated: 1 of 25; total allocated: 1 of 25]
2016-12-08 13:26:20,461 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
conn.PoolingClientConnectionManager 
(PoolingClientConnectionManager.java:leaseConnection(218)) - Connection leased: 
[id: 1][route: {s}->https://landsat-pds.s3.amazonaws.com:443][total kept alive: 
0; route allocated: 1 of 25; total allocated: 1 of 25]
2016-12-08 13:26:20,462 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
protocol.RequestAddCookies (RequestAddCookies.java:process(122)) - CookieSpec 
selected: default
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
protocol.RequestAuthCache (RequestAuthCache.java:process(76)) - Auth cache not 
set in the context
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
protocol.RequestProxyAuthentication 
(RequestProxyAuthentication.java:process(88)) - Proxy auth state: UNCHALLENGED
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
conn.DefaultClientConnection 
(DefaultClientConnection.java:sendRequestHeader(276)) - Sending request: GET 
/scene_list.gz HTTP/1.1
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
/scene_list.gz HTTP/1.1
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 
landsat-pds.s3.amazonaws.com
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
Authorization: *REMOVED*
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
User-Agent: Hadoop 2.8.0-SNAPSHOT, aws-sdk-java/1.10.6 Mac_OS_X/10.12.1 
Java_HotSpot(TM)_64-Bit_Server_VM/25.102-b14/1.8.0_102
2016-12-08 13:26:20,464 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: 
bytes=0-39234217
2016-12-08 13:26:20,464 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> Date: 
Thu, 08 Dec 2016 13:26:20 GMT
2016-12-08 13:26:20,464 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
Content-Type: application/x-www-form-urlencoded; charset=utf-8
2016-12-08 13:26:20,464 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
Connection: Keep-Alive
2016-12-08 13:26:20,643 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
conn.DefaultClientConnection 
(DefaultClientConnection.java:receiveResponseHeader(261)) - Receiving response: 
HTTP/1.1 206 Partial Content
2016-12-08 13:26:20,643 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:receiveResponseHeader(264)) - << 
HTTP/1.1 206 Partial Content
2016-12-08 13:26:20,643 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:receiveResponseHeader(267)) - << 
x-amz-id-2: 
cZjR9+rI+ZlDKmRWEkFmnCQmj0p7jeF9c5/kXVKeM5oKLTQRf0rQOfR1ipw5r0lnmPbfknnj+o8=
2016-12-08 13:26:20,643 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:receiveResponseHeader(267)) - << 
x-amz-request-id: 71429013F16577FD
2016-12-08 13:26:20,643 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:receiveResponseHeader(267)) - << 
Date: Thu, 

[jira] [Commented] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful

2016-12-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732215#comment-15732215
 ] 

Steve Loughran commented on HADOOP-13871:
-

Logging org.apache.http shows why things are taking just over 60s: something 
is pausing, and it's only when the connection is closed that things return.
{code}
2016-12-08 13:26:20,458 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
s3a.S3AFileSystem (S3AInputStream.java:reopen(140)) - 
reopen(s3a://landsat-pds/scene_list.gz) for read from new offset 
range[0-39234217], length=1048576, streamPosition=0, nextReadPosition=0
2016-12-08 13:26:20,460 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
amazonaws.request (AmazonHttpClient.java:executeOneRequest(671)) - Sending 
Request: GET https://landsat-pds.s3.amazonaws.com /scene_list.gz Headers: 
(User-Agent: Hadoop 2.8.0-SNAPSHOT, aws-sdk-java/1.10.6 Mac_OS_X/10.12.1 
Java_HotSpot(TM)_64-Bit_Server_VM/25.102-b14/1.8.0_102, Range: 
bytes=0-39234217, Content-Type: application/x-www-form-urlencoded; 
charset=utf-8, ) 
2016-12-08 13:26:20,460 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
conn.PoolingClientConnectionManager 
(PoolingClientConnectionManager.java:requestConnection(184)) - Connection 
request: [route: {s}->https://landsat-pds.s3.amazonaws.com:443][total kept 
alive: 1; route allocated: 1 of 25; total allocated: 1 of 25]
2016-12-08 13:26:20,461 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
conn.PoolingClientConnectionManager 
(PoolingClientConnectionManager.java:leaseConnection(218)) - Connection leased: 
[id: 1][route: {s}->https://landsat-pds.s3.amazonaws.com:443][total kept alive: 
0; route allocated: 1 of 25; total allocated: 1 of 25]
2016-12-08 13:26:20,462 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
protocol.RequestAddCookies (RequestAddCookies.java:process(122)) - CookieSpec 
selected: default
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
protocol.RequestAuthCache (RequestAuthCache.java:process(76)) - Auth cache not 
set in the context
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
protocol.RequestProxyAuthentication 
(RequestProxyAuthentication.java:process(88)) - Proxy auth state: UNCHALLENGED
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
conn.DefaultClientConnection 
(DefaultClientConnection.java:sendRequestHeader(276)) - Sending request: GET 
/scene_list.gz HTTP/1.1
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
/scene_list.gz HTTP/1.1
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 
landsat-pds.s3.amazonaws.com
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
Authorization: *REMOVED*
2016-12-08 13:26:20,463 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
User-Agent: Hadoop 2.8.0-SNAPSHOT, aws-sdk-java/1.10.6 Mac_OS_X/10.12.1 
Java_HotSpot(TM)_64-Bit_Server_VM/25.102-b14/1.8.0_102
2016-12-08 13:26:20,464 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: 
bytes=0-39234217
2016-12-08 13:26:20,464 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> Date: 
Thu, 08 Dec 2016 13:26:20 GMT
2016-12-08 13:26:20,464 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
Content-Type: application/x-www-form-urlencoded; charset=utf-8
2016-12-08 13:26:20,464 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
Connection: Keep-Alive
2016-12-08 13:26:20,643 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
conn.DefaultClientConnection 
(DefaultClientConnection.java:receiveResponseHeader(261)) - Receiving response: 
HTTP/1.1 206 Partial Content
2016-12-08 13:26:20,643 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:receiveResponseHeader(264)) - << 
HTTP/1.1 206 Partial Content
2016-12-08 13:26:20,643 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:receiveResponseHeader(267)) - << 
x-amz-id-2: 
cZjR9+rI+ZlDKmRWEkFmnCQmj0p7jeF9c5/kXVKeM5oKLTQRf0rQOfR1ipw5r0lnmPbfknnj+o8=
2016-12-08 13:26:20,643 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers (DefaultClientConnection.java:receiveResponseHeader(267)) - << 
x-amz-request-id: 71429013F16577FD
2016-12-08 13:26:20,643 [JUnit-testTimeToOpenAndReadWholeFileBlocks] DEBUG 
http.headers 

[jira] [Commented] (HADOOP-13869) using HADOOP_USER_CLASSPATH_FIRST inconsistently

2016-12-08 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732081#comment-15732081
 ] 

Fei Hui commented on HADOOP-13869:
--

Thanks [~ste...@apache.org], I have set the Affects versions.

> using HADOOP_USER_CLASSPATH_FIRST inconsistently
> 
>
> Key: HADOOP-13869
> URL: https://issues.apache.org/jira/browse/HADOOP-13869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13869.001.patch
>
>
> I find HADOOP_USER_CLASSPATH_FIRST is used inconsistently: in some places it 
> is set to true, in others to yes.
> I know it doesn't matter, because it affects the classpath whenever 
> HADOOP_USER_CLASSPATH_FIRST is non-empty,
> but maybe it's better to set HADOOP_USER_CLASSPATH_FIRST uniformly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13869) using HADOOP_USER_CLASSPATH_FIRST inconsistently

2016-12-08 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-13869:
-
Affects Version/s: 3.0.0-alpha2
 Target Version/s: 3.0.0-alpha2

> using HADOOP_USER_CLASSPATH_FIRST inconsistently
> 
>
> Key: HADOOP-13869
> URL: https://issues.apache.org/jira/browse/HADOOP-13869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13869.001.patch
>
>
> I find HADOOP_USER_CLASSPATH_FIRST is used inconsistently: in some places it 
> is set to true, in others to yes.
> I know it doesn't matter, because it affects the classpath whenever 
> HADOOP_USER_CLASSPATH_FIRST is non-empty,
> but maybe it's better to set HADOOP_USER_CLASSPATH_FIRST uniformly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


