[jira] [Commented] (HADOOP-15767) [JDK10] Building native package on JDK10 fails due to missing javah

2018-09-17 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618557#comment-16618557
 ] 

Takanobu Asanuma commented on HADOOP-15767:
---

Thanks for the information, [~ajisakaa]. I will follow it.

> [JDK10] Building native package on JDK10 fails due to missing javah
> ---
>
> Key: HADOOP-15767
> URL: https://issues.apache.org/jira/browse/HADOOP-15767
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>
> This is the error log.
> {noformat}
> [ERROR] Failed to execute goal 
> org.codehaus.mojo:native-maven-plugin:1.0-alpha-8:javah (default) on project 
> hadoop-common: Error running javah command: Error executing command line. 
> Exit code:127 -> [Help 1]
> {noformat}
> See also: https://github.com/mojohaus/maven-native/issues/17



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15764) [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15764:
---
Status: Open  (was: Patch Available)

> [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement
> -
>
> Key: HADOOP-15764
> URL: https://issues.apache.org/jira/browse/HADOOP-15764
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net, util
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15764.01.patch
>
>
> In JDK10, sun.net.dns.ResolverConfiguration is encapsulated and not 
> accessible from unnamed modules. This issue is to remove the usage of 
> ResolverConfiguration.






[jira] [Commented] (HADOOP-15764) [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618549#comment-16618549
 ] 

Akira Ajisaka commented on HADOOP-15764:


https://builds.apache.org/job/PreCommit-HADOOP-Build/15213/artifact/out/patch-shadedclient.txt
{noformat}[ERROR] Found artifact with unexpected contents: 
'/testptch/hadoop/hadoop-client-modules/hadoop-client-runtime/target/hadoop-client-runtime-3.2.0-SNAPSHOT.jar'
 Please check the following and either correct the build or update the allowed 
list with reasoning. jnamed$3.class lookup.class update.class dig.class 
jnamed.class jnamed$1.class jnamed$2.class{noformat}
The above error is related to the patch.

> [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement
> -
>
> Key: HADOOP-15764
> URL: https://issues.apache.org/jira/browse/HADOOP-15764
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net, util
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15764.01.patch
>
>
> In JDK10, sun.net.dns.ResolverConfiguration is encapsulated and not 
> accessible from unnamed modules. This issue is to remove the usage of 
> ResolverConfiguration.






[jira] [Commented] (HADOOP-15767) [JDK10] Building native package on JDK10 fails due to missing javah

2018-09-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618545#comment-16618545
 ] 

Akira Ajisaka commented on HADOOP-15767:


As Apache Accumulo did 
(https://github.com/apache/accumulo/commit/4bd3afa2c997fdcb35d54b8be32b703385f3b5c1#diff-27da03a9af4d8a1ab61b98bce67375fc),
 replacing the maven-native-plugin with the maven-compiler-plugin seems fine.
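The Accumulo approach amounts to generating JNI headers with javac's -h option (available since JDK 8) instead of the removed standalone javah tool. A minimal, hedged sketch of the maven-compiler-plugin configuration — the output directory is an illustrative assumption, not the actual Hadoop layout:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerArgs>
      <!-- javac -h writes the JNI headers that javah used to generate -->
      <arg>-h</arg>
      <arg>${project.build.directory}/native/javah</arg>
    </compilerArgs>
  </configuration>
</plugin>
```

The generated headers land in the configured directory during the normal compile phase, so the separate native-maven-plugin javah execution can be dropped entirely.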

> [JDK10] Building native package on JDK10 fails due to missing javah
> ---
>
> Key: HADOOP-15767
> URL: https://issues.apache.org/jira/browse/HADOOP-15767
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>
> This is the error log.
> {noformat}
> [ERROR] Failed to execute goal 
> org.codehaus.mojo:native-maven-plugin:1.0-alpha-8:javah (default) on project 
> hadoop-common: Error running javah command: Error executing command line. 
> Exit code:127 -> [Help 1]
> {noformat}
> See also: https://github.com/mojohaus/maven-native/issues/17






[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-09-17 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618542#comment-16618542
 ] 

Sunil Govindan commented on HADOOP-15407:
-

Hi [~ste...@apache.org] [~tmarquardt] [~mackrorysd] [~DanielZhou]

Thanks for closing many items in this and starting Vote thread.

You mentioned in the mail thread that this feature can work independently and 
has no impact on other modules. However, the vote + merge will need another week, 
as per my understanding. Could you please confirm the stability of this feature 
and any potential risks in going with the 3.2 release train?

We were planning the 3.2 branch cut for this week, and we might need to delay it 
by at least a week to get this feature in. As long as you feel the feature is 
functional (barring improvements for the next dot releases), has no major 
impact, and can merge within a week, I am fine with getting this in.

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Da Zhou
>Priority: Blocker
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, HADOOP-15407-008.patch, 
> HADOOP-15407-HADOOP-15407-008.patch, HADOOP-15407-HADOOP-15407.006.patch, 
> HADOOP-15407-HADOOP-15407.007.patch, HADOOP-15407-HADOOP-15407.008.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement to WASB. WASB is not 
> deprecated but is in pure maintenance mode and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
> {color:#212121}· This avoids the need for using temporary/intermediate 
> files, which increase cost (and framework complexity around committing 
> jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> Junit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used but not as default file system.) Various 
> different customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, 

[jira] [Created] (HADOOP-15767) [JDK10] Building native package on JDK10 fails due to missing javah

2018-09-17 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HADOOP-15767:
-

 Summary: [JDK10] Building native package on JDK10 fails due to 
missing javah
 Key: HADOOP-15767
 URL: https://issues.apache.org/jira/browse/HADOOP-15767
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


This is the error log.
{noformat}
[ERROR] Failed to execute goal 
org.codehaus.mojo:native-maven-plugin:1.0-alpha-8:javah (default) on project 
hadoop-common: Error running javah command: Error executing command line. Exit 
code:127 -> [Help 1]
{noformat}

See also: https://github.com/mojohaus/maven-native/issues/17
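Exit code 127 is the shell's "command not found" status: the standalone javah binary was removed in JDK 10 (its function moved to javac -h), so the plugin's shelled-out javah invocation fails with 127. A small shell sketch reproduces the status with a deliberately fictitious command name:

```shell
# Running a nonexistent command yields exit status 127 in POSIX shells --
# the same status native-maven-plugin surfaces when javah is missing.
no-such-command-javah-demo 2>/dev/null
echo "exit code: $?"
# prints: exit code: 127
```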






[jira] [Commented] (HADOOP-15764) [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement

2018-09-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618537#comment-16618537
 ] 

Hadoop QA commented on HADOOP-15764:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
13s{color} | {color:green} root generated 0 new + 1333 unchanged - 2 fixed = 
1333 total (was 1335) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 26 unchanged - 1 fixed = 26 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m  
4s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
41s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15764 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940125/HADOOP-15764.01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux f6aa8a692a68 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ee051ef |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15213/testReport/ |
| Max. process+thread count | 1363 (vs. ulimit of 1) |
| modules | C

[jira] [Commented] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618523#comment-16618523
 ] 

Hadoop QA commented on HADOOP-15719:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
43s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15719 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940129/HADOOP-15719-HADOOP-15407-004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 60b1e20dbc88 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-15407 / b4c2304 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15214/testReport/ |
| Max. process+thread count | 358 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15214/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fail-fast when using OAuth over http
> 
>
>

[jira] [Issue Comment Deleted] (HADOOP-15304) [JDK10] Migrate from com.sun.tools.doclets to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15304:
---
Comment: was deleted

(was: The latest version of wro4j-maven-plugin does not support JDK 9. 
https://github.com/wro4j/wro4j/issues/1039)

> [JDK10] Migrate from com.sun.tools.doclets to the replacement
> -
>
> Key: HADOOP-15304
> URL: https://issues.apache.org/jira/browse/HADOOP-15304
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15304.01.patch, HADOOP-15304.02.patch, 
> HADOOP-15304.03.patch
>
>
> com.sun.tools.doclets.* packages were removed in Java 10. 
> [https://bugs.openjdk.java.net/browse/JDK-8177511]
> This causes hadoop-annotations module to fail.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Compilation failure: Compilation failure:
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java:[61,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java:[56,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> {noformat}






[jira] [Commented] (HADOOP-15304) [JDK10] Migrate from com.sun.tools.doclets to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618507#comment-16618507
 ] 

Akira Ajisaka commented on HADOOP-15304:


The latest version of wro4j-maven-plugin does not support JDK 9. 
https://github.com/wro4j/wro4j/issues/1039

> [JDK10] Migrate from com.sun.tools.doclets to the replacement
> -
>
> Key: HADOOP-15304
> URL: https://issues.apache.org/jira/browse/HADOOP-15304
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15304.01.patch, HADOOP-15304.02.patch, 
> HADOOP-15304.03.patch
>
>
> com.sun.tools.doclets.* packages were removed in Java 10. 
> [https://bugs.openjdk.java.net/browse/JDK-8177511]
> This causes hadoop-annotations module to fail.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Compilation failure: Compilation failure:
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java:[61,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java:[56,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> {noformat}






[jira] [Created] (HADOOP-15766) mvn package -Pyarn-ui fails in JDK9

2018-09-17 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15766:
--

 Summary: mvn package -Pyarn-ui fails in JDK9
 Key: HADOOP-15766
 URL: https://issues.apache.org/jira/browse/HADOOP-15766
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


{{mvn package -Pdist,native,yarn-ui -Dtar -DskipTests}} failed on trunk with 
Java 9.0.4.
{noformat}
[ERROR] Failed to execute goal ro.isdc.wro4j:wro4j-maven-plugin:1.7.9:run 
(default) on project hadoop-yarn-ui: Execution default of goal 
ro.isdc.wro4j:wro4j-maven-plugin:1.7.9:run failed: An API incompatibility was 
encountered while executing ro.isdc.wro4j:wro4j-maven-plugin:1.7.9:run: 
java.lang.ExceptionInInitializerError: null
[ERROR] -
[ERROR] realm =plugin>ro.isdc.wro4j:wro4j-maven-plugin:1.7.9
[ERROR] strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy
[ERROR] urls[0] = 
file:/home/aajisaka/.m2/repository/ro/isdc/wro4j/wro4j-maven-plugin/1.7.9/wro4j-maven-plugin-1.7.9.jar
[ERROR] urls[1] = 
file:/home/aajisaka/.m2/repository/ro/isdc/wro4j/wro4j-core/1.7.9/wro4j-core-1.7.9.jar
[ERROR] urls[2] = 
file:/home/aajisaka/.m2/repository/org/apache/commons/commons-lang3/3.4/commons-lang3-3.4.jar
(snip)
{noformat}






[jira] [Commented] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-17 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618496#comment-16618496
 ] 

ASF GitHub Bot commented on HADOOP-15741:
-

Github user aajisaka commented on the pull request:


https://github.com/apache/hadoop/commit/281c192e7f917545151117fc7e067dc480b93499#commitcomment-30540671
  
https://issues.apache.org/jira/browse/HADOOP-15741


> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15741.1.patch, HADOOP-15741.2.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or upper to 
> support Java 10.






[jira] [Commented] (HADOOP-15765) Can not find login module class for IBM due to hard codes

2018-09-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618494#comment-16618494
 ] 

Hadoop QA commented on HADOOP-15765:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
30s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  9m 
51s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 33s{color} 
| {color:red} root generated 178 new + 1157 unchanged - 0 fixed = 1335 total 
(was 1157) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 81 unchanged - 0 fixed = 84 total (was 81) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15765 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940118/HADOOP-15765_000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f1c044f1c3ad 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ee051ef |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15212/artifact/out/branch-compile-root.txt
 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15212/artifact/out/diff-compile-javac-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15212/artifa

[jira] [Commented] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618492#comment-16618492
 ] 

Akira Ajisaka commented on HADOOP-15741:


{quote}Repository: hadoop
Updated Branches:
  refs/heads/trunk 8b2f5e60f -> 281c192e7


[JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1.

Signed-off-by: Akira Ajisaka {quote}

I forgot to add "HADOOP-15741" to the commit message. Sorry for that.


> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15741.1.patch, HADOOP-15741.2.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or upper to 
> support Java 10.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-11124) Java 9 removes/hides Java internal classes

2018-09-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-11124.

Resolution: Done

All the related tasks were fixed. Closing this.

> Java 9 removes/hides Java internal classes
> --
>
> Key: HADOOP-11124
> URL: https://issues.apache.org/jira/browse/HADOOP-11124
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Priority: Major
> Attachments: JDK Internal API Usage Report for hadoop-2.5.1.html
>
>
> Java 9 removes various internal classes; adapt the code to this.
> It should be possible to switch to code that works on Java7+, yet which 
> adapts to the changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618483#comment-16618483
 ] 

Da Zhou commented on HADOOP-15719:
--

[~tmarquardt] good catch. The test case passes even without the fix because 
another IllegalArgumentException is thrown when initializing ABFS with the 
fake configuration. 
Submitting patch 004.
I've updated the test to expect IllegalArgumentException explicitly and to 
check the message contained in it.
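The fail-fast idea can be sketched as below. This is a hedged illustration only: the method name, message wording, and class name are assumptions for this sketch, not the actual ABFS code in the patch. The point is simply to reject an OAuth auth type on the insecure abfs:// scheme with an immediate IllegalArgumentException instead of failing slowly later.

```java
// Hypothetical sketch (names and message are assumptions, not the ABFS code):
// fail fast when OAuth is configured over an insecure scheme.
public class OAuthSchemeCheck {
    static void validateScheme(String scheme, String authType) {
        if ("OAuth".equalsIgnoreCase(authType) && !"abfss".equals(scheme)) {
            throw new IllegalArgumentException(
                "Incorrect URI: OAuth requires a secure (abfss) scheme, but got: "
                + scheme);
        }
    }

    public static void main(String[] args) {
        validateScheme("abfss", "OAuth");    // secure scheme: passes silently
        try {
            validateScheme("abfs", "OAuth"); // insecure scheme: fails fast
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

A test asserting on the exception message, as described in the comment above, then only needs to trigger this check with a fake configuration.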

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch, HADOOP-15719-HADOOP-15407-003.patch, 
> HADOOP-15719-HADOOP-15407-004.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15719:
-
Attachment: HADOOP-15719-HADOOP-15407-004.patch

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch, HADOOP-15719-HADOOP-15407-003.patch, 
> HADOOP-15719-HADOOP-15407-004.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15764) [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15764:
---
Status: Patch Available  (was: Open)

01 patch: Replaced with dnsjava.

> [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement
> -
>
> Key: HADOOP-15764
> URL: https://issues.apache.org/jira/browse/HADOOP-15764
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net, util
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15764.01.patch
>
>
> In JDK10, sun.net.dns.ResolverConfiguration is encapsulated and not 
> accessible from unnamed modules. This issue is to remove the usage of 
> ResolverConfiguration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15764) [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15764:
---
Attachment: HADOOP-15764.01.patch

> [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement
> -
>
> Key: HADOOP-15764
> URL: https://issues.apache.org/jira/browse/HADOOP-15764
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net, util
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15764.01.patch
>
>
> In JDK10, sun.net.dns.ResolverConfiguration is encapsulated and not 
> accessible from unnamed modules. This issue is to remove the usage of 
> ResolverConfiguration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15765) Can not find login module class for IBM due to hard codes

2018-09-17 Thread Jianfei Jiang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HADOOP-15765:
---
Affects Version/s: 3.0.3

> Can not find login module class for IBM due to hard codes
> -
>
> Key: HADOOP-15765
> URL: https://issues.apache.org/jira/browse/HADOOP-15765
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.3
>Reporter: Jianfei Jiang
>Priority: Major
> Attachments: HADOOP-15765_000.patch
>
>
> Due to differences between IBM JDK versions, the login module class varies. 
> However, the class for a given JDK (regardless of version) is hard-coded in 
> Hadoop. We have faced errors like the following:
> *javax.security.auth.login.LoginException: unable to find LoginModule class: 
> com.ibm.security.auth.module.LinuxLoginModule*
>  
> Should we make this value a configuration option that users can set?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15764) [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-15764:
--

Assignee: Akira Ajisaka

> [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement
> -
>
> Key: HADOOP-15764
> URL: https://issues.apache.org/jira/browse/HADOOP-15764
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net, util
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> In JDK10, sun.net.dns.ResolverConfiguration is encapsulated and not 
> accessible from unnamed modules. This issue is to remove the usage of 
> ResolverConfiguration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15765) Can not find login module class for IBM due to hard codes

2018-09-17 Thread Jianfei Jiang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HADOOP-15765:
---
Status: Patch Available  (was: Open)

> Can not find login module class for IBM due to hard codes
> -
>
> Key: HADOOP-15765
> URL: https://issues.apache.org/jira/browse/HADOOP-15765
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Jianfei Jiang
>Priority: Major
> Attachments: HADOOP-15765_000.patch
>
>
> Due to differences between IBM JDK versions, the login module class varies. 
> However, the class for a given JDK (regardless of version) is hard-coded in 
> Hadoop. We have faced errors like the following:
> *javax.security.auth.login.LoginException: unable to find LoginModule class: 
> com.ibm.security.auth.module.LinuxLoginModule*
>  
> Should we make this value a configuration option that users can set?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15304) [JDK10] Migrate from com.sun.tools.doclets to the replacement

2018-09-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618426#comment-16618426
 ] 

Hadoop QA commented on HADOOP-15304:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
56m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-annotations in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15304 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940106/HADOOP-15304.03.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 4fefda192e8e 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 281c192 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15211/testReport/ |
| Max. process+thread count | 334 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-annotations hadoop-project-dist U: 
. |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15211/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-15765) Can not find login module class for IBM due to hard codes

2018-09-17 Thread Jianfei Jiang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618425#comment-16618425
 ] 

Jianfei Jiang commented on HADOOP-15765:


I have added a patch; it simply makes the values of os.login.module.name and 
os.principal.class in the UserGroupInformation class configurable.

However, the KerberosAuthenticator class has similar hard-coded values and 
should also be made configurable. In that class there is no Configuration 
available, and it cannot load one the way UserGroupInformation does. One way 
to pass the value to KerberosAuthenticator would be a system property, but I 
don't think that is a good approach. Any advice?
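The proposed direction can be sketched as follows. This is a hedged illustration, not the patch itself: the configuration key name and the fallback class names are assumptions here (the IBM class name is taken from the error in the issue description). The idea is to consult configuration first and keep the current hard-coded per-vendor default as the fallback.

```java
import java.util.Properties;

// Hypothetical sketch of the proposed change: read the login module class
// from configuration and fall back to the hard-coded per-vendor default.
// The key name and default class names below are assumptions.
public class LoginModuleResolver {
    static final String KEY = "hadoop.security.os.login.module.name"; // assumed key

    static String resolve(Properties conf, boolean ibmJava, boolean windows) {
        String configured = conf.getProperty(KEY);
        if (configured != null && !configured.isEmpty()) {
            return configured; // user override wins
        }
        // current hard-coded behaviour, kept as the default
        if (ibmJava) {
            return windows ? "com.ibm.security.auth.module.NTLoginModule"
                           : "com.ibm.security.auth.module.LinuxLoginModule";
        }
        return windows ? "com.sun.security.auth.module.NTLoginModule"
                       : "com.sun.security.auth.module.UnixLoginModule";
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(resolve(conf, true, false)); // hard-coded default
        conf.setProperty(KEY, "com.ibm.security.auth.module.JAASLoginModule");
        System.out.println(resolve(conf, true, false)); // configured override
    }
}
```

For KerberosAuthenticator, which has no Configuration instance, a system property could feed the same resolver, though as noted above that approach has drawbacks.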

> Can not find login module class for IBM due to hard codes
> -
>
> Key: HADOOP-15765
> URL: https://issues.apache.org/jira/browse/HADOOP-15765
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Jianfei Jiang
>Priority: Major
> Attachments: HADOOP-15765_000.patch
>
>
> Due to differences between IBM JDK versions, the login module class varies. 
> However, the class for a given JDK (regardless of version) is hard-coded in 
> Hadoop. We have faced errors like the following:
> *javax.security.auth.login.LoginException: unable to find LoginModule class: 
> com.ibm.security.auth.module.LinuxLoginModule*
>  
> Should we make this value a configuration option that users can set?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15765) Can not find login module class for IBM due to hard codes

2018-09-17 Thread Jianfei Jiang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618425#comment-16618425
 ] 

Jianfei Jiang edited comment on HADOOP-15765 at 9/18/18 3:31 AM:
-

I have added a patch; it simply makes the values of os.login.module.name and 
os.principal.class in the UserGroupInformation class configurable.

However, the KerberosAuthenticator class has similar hard-coded values and 
should also be made configurable. In that class there is no Configuration 
available, and it cannot load one the way UserGroupInformation does. One way 
to pass the value to KerberosAuthenticator would be a system property, but I 
don't think that is a good approach. Any advice?


was (Author: jiangjianfei):
I add a patch, it just make the value of [[os.login.module.name]] and 
[[os.principal.class]] in class [[UserGroupInformation]] configurable.

 

However in class [[KerberosAuthenticator]], it also have similiar codes and 
should change to configurable, in this class, there is no given conf and cannot 
load Configuration like UserGroupInformation do. One way to make the value in 
KerberosAuthenticator is to set system environment, I dont think it is a good 
approach. Any advice?

> Can not find login module class for IBM due to hard codes
> -
>
> Key: HADOOP-15765
> URL: https://issues.apache.org/jira/browse/HADOOP-15765
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Jianfei Jiang
>Priority: Major
> Attachments: HADOOP-15765_000.patch
>
>
> Due to differences between IBM JDK versions, the login module class varies. 
> However, the class for a given JDK (regardless of version) is hard-coded in 
> Hadoop. We have faced errors like the following:
> *javax.security.auth.login.LoginException: unable to find LoginModule class: 
> com.ibm.security.auth.module.LinuxLoginModule*
>  
> Should we make this value a configuration option that users can set?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15765) Can not find login module class for IBM due to hard codes

2018-09-17 Thread Jianfei Jiang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HADOOP-15765:
---
Attachment: HADOOP-15765_000.patch

> Can not find login module class for IBM due to hard codes
> -
>
> Key: HADOOP-15765
> URL: https://issues.apache.org/jira/browse/HADOOP-15765
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Jianfei Jiang
>Priority: Major
> Attachments: HADOOP-15765_000.patch
>
>
> Due to differences between IBM JDK versions, the login module class varies. 
> However, the class for a given JDK (regardless of version) is hard-coded in 
> Hadoop. We have faced errors like the following:
> *javax.security.auth.login.LoginException: unable to find LoginModule class: 
> com.ibm.security.auth.module.LinuxLoginModule*
>  
> Should we make this value a configuration option that users can set?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15765) Can not find login module class for IBM due to hard codes

2018-09-17 Thread Jianfei Jiang (JIRA)
Jianfei Jiang created HADOOP-15765:
--

 Summary: Can not find login module class for IBM due to hard codes
 Key: HADOOP-15765
 URL: https://issues.apache.org/jira/browse/HADOOP-15765
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Jianfei Jiang


Due to differences between IBM JDK versions, the login module class varies. 
However, the class for a given JDK (regardless of version) is hard-coded in 
Hadoop. We have faced errors like the following:

*javax.security.auth.login.LoginException: unable to find LoginModule class: 
com.ibm.security.auth.module.LinuxLoginModule*

Should we make this value a configuration option that users can set?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15742) Log if ipc backoff is enabled in CallQueueManager

2018-09-17 Thread Ryan Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618421#comment-16618421
 ] 

Ryan Wu commented on HADOOP-15742:
--

:D

> Log if ipc backoff is enabled in CallQueueManager
> -
>
> Key: HADOOP-15742
> URL: https://issues.apache.org/jira/browse/HADOOP-15742
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Ryan Wu
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15742.001.patch, HADOOP-15742.002.patch, 
> HADOOP-15742.003.patch
>
>
> Currently we don't log the info of ipc backoff. It will look good to print 
> this as well so that makes users know if we enable this.
> {code:java}
>   public CallQueueManager(Class<? extends BlockingQueue<E>> backingClass,
>       Class<? extends RpcScheduler> schedulerClass,
>       boolean clientBackOffEnabled, int maxQueueSize, String namespace,
>       Configuration conf) {
>     int priorityLevels = parseNumLevels(namespace, conf);
>     this.scheduler = createScheduler(schedulerClass, priorityLevels,
>         namespace, conf);
>     BlockingQueue<E> bq = createCallQueueInstance(backingClass,
>         priorityLevels, maxQueueSize, namespace, conf);
>     this.clientBackOffEnabled = clientBackOffEnabled;
>     this.putRef = new AtomicReference<BlockingQueue<E>>(bq);
>     this.takeRef = new AtomicReference<BlockingQueue<E>>(bq);
>     LOG.info("Using callQueue: " + backingClass + " queueCapacity: " +
>         maxQueueSize + " scheduler: " + schedulerClass);
>   }
> {code}
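The requested improvement amounts to appending the backoff flag to the startup log line. The sketch below is a hedged illustration of that message format (the exact wording of the committed log line may differ), extracted into a plain method so it can be shown standalone:

```java
// Minimal sketch of the change discussed above: include the client backoff
// flag in the startup log message so operators can see whether it is enabled.
// The message format here is illustrative, not necessarily the committed text.
public class CallQueueLogDemo {
    static String startupMessage(String backingClass, int maxQueueSize,
                                 String schedulerClass,
                                 boolean clientBackOffEnabled) {
        return "Using callQueue: " + backingClass
            + ", queueCapacity: " + maxQueueSize
            + ", scheduler: " + schedulerClass
            + ", ipcBackoff: " + clientBackOffEnabled;
    }

    public static void main(String[] args) {
        System.out.println(startupMessage(
            "java.util.concurrent.LinkedBlockingQueue", 1000,
            "org.apache.hadoop.ipc.DefaultRpcScheduler", false));
    }
}
```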



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15742) Log if ipc backoff is enabled in CallQueueManager

2018-09-17 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-15742:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk.

Thanks [~jianliang.wu] for the contribution, and thanks to everyone for the 
additional reviews :).

> Log if ipc backoff is enabled in CallQueueManager
> -
>
> Key: HADOOP-15742
> URL: https://issues.apache.org/jira/browse/HADOOP-15742
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Ryan Wu
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15742.001.patch, HADOOP-15742.002.patch, 
> HADOOP-15742.003.patch
>
>
> Currently we don't log the info of ipc backoff. It will look good to print 
> this as well so that makes users know if we enable this.
> {code:java}
>   public CallQueueManager(Class<? extends BlockingQueue<E>> backingClass,
>       Class<? extends RpcScheduler> schedulerClass,
>       boolean clientBackOffEnabled, int maxQueueSize, String namespace,
>       Configuration conf) {
>     int priorityLevels = parseNumLevels(namespace, conf);
>     this.scheduler = createScheduler(schedulerClass, priorityLevels,
>         namespace, conf);
>     BlockingQueue<E> bq = createCallQueueInstance(backingClass,
>         priorityLevels, maxQueueSize, namespace, conf);
>     this.clientBackOffEnabled = clientBackOffEnabled;
>     this.putRef = new AtomicReference<BlockingQueue<E>>(bq);
>     this.takeRef = new AtomicReference<BlockingQueue<E>>(bq);
>     LOG.info("Using callQueue: " + backingClass + " queueCapacity: " +
>         maxQueueSize + " scheduler: " + schedulerClass);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15742) Log if ipc backoff is enabled in CallQueueManager

2018-09-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618419#comment-16618419
 ] 

Hudson commented on HADOOP-15742:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14986 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14986/])
HADOOP-15742. Log if ipc backoff is enabled in CallQueueManager. (yqlin: rev 
ee051ef9fec1fddb612aa1feae9fd3df7091354f)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java


> Log if ipc backoff is enabled in CallQueueManager
> -
>
> Key: HADOOP-15742
> URL: https://issues.apache.org/jira/browse/HADOOP-15742
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Ryan Wu
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15742.001.patch, HADOOP-15742.002.patch, 
> HADOOP-15742.003.patch
>
>
> Currently we don't log the info of ipc backoff. It will look good to print 
> this as well so that makes users know if we enable this.
> {code:java}
>   public CallQueueManager(Class<? extends BlockingQueue<E>> backingClass,
>       Class<? extends RpcScheduler> schedulerClass,
>       boolean clientBackOffEnabled, int maxQueueSize, String namespace,
>       Configuration conf) {
>     int priorityLevels = parseNumLevels(namespace, conf);
>     this.scheduler = createScheduler(schedulerClass, priorityLevels,
>         namespace, conf);
>     BlockingQueue<E> bq = createCallQueueInstance(backingClass,
>         priorityLevels, maxQueueSize, namespace, conf);
>     this.clientBackOffEnabled = clientBackOffEnabled;
>     this.putRef = new AtomicReference<BlockingQueue<E>>(bq);
>     this.takeRef = new AtomicReference<BlockingQueue<E>>(bq);
>     LOG.info("Using callQueue: " + backingClass + " queueCapacity: " +
>         maxQueueSize + " scheduler: " + schedulerClass);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-17 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618389#comment-16618389
 ] 

Takanobu Asanuma commented on HADOOP-15741:
---

Thanks for reviewing and committing it, [~ajisakaa]!

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15741.1.patch, HADOOP-15741.2.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or upper to 
> support Java 10.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15304) [JDK10] Migrate from com.sun.tools.doclets to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15304:
---
Status: Patch Available  (was: Open)

> [JDK10] Migrate from com.sun.tools.doclets to the replacement
> -
>
> Key: HADOOP-15304
> URL: https://issues.apache.org/jira/browse/HADOOP-15304
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15304.01.patch, HADOOP-15304.02.patch, 
> HADOOP-15304.03.patch
>
>
> com.sun.tools.doclets.* packages were removed in Java 10. 
> [https://bugs.openjdk.java.net/browse/JDK-8177511]
> This causes hadoop-annotations module to fail.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Compilation failure: Compilation failure:
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java:[61,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java:[56,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> {noformat}






[jira] [Updated] (HADOOP-15304) [JDK10] Migrate from com.sun.tools.doclets to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15304:
---
Component/s: build

> [JDK10] Migrate from com.sun.tools.doclets to the replacement
> -
>
> Key: HADOOP-15304
> URL: https://issues.apache.org/jira/browse/HADOOP-15304
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15304.01.patch, HADOOP-15304.02.patch, 
> HADOOP-15304.03.patch
>
>
> com.sun.tools.doclets.* packages were removed in Java 10. 
> [https://bugs.openjdk.java.net/browse/JDK-8177511]
> This causes hadoop-annotations module to fail.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Compilation failure: Compilation failure:
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java:[61,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java:[56,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> {noformat}






[jira] [Commented] (HADOOP-15304) [JDK10] Migrate from com.sun.tools.doclets to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618338#comment-16618338
 ] 

Akira Ajisaka commented on HADOOP-15304:


03 patch:
* The Maven Javadoc Plugin has been updated to 3.0.1, so undo the earlier change 
to its configuration.
* Move the doclet-related settings into a profile, which is not applied on 
JDK 10 or later.
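For background on what the migration itself looks like: the removed com.sun.tools.doclets.standard.Standard (and its static validOptions hook) corresponds to the jdk.javadoc.doclet API introduced in JDK 9. Below is a minimal sketch of a delegating doclet, assuming JDK 10 or later for StandardDoclet; the class name is hypothetical and only loosely mirrors Hadoop's ExcludePrivateAnnotationsStandardDoclet, not the attached patch.

```java
import java.util.Locale;
import java.util.Set;
import javax.lang.model.SourceVersion;
import jdk.javadoc.doclet.Doclet;
import jdk.javadoc.doclet.DocletEnvironment;
import jdk.javadoc.doclet.Reporter;
import jdk.javadoc.doclet.StandardDoclet;

// Sketch: instead of extending the removed com.sun.tools.doclets.standard.Standard,
// wrap jdk.javadoc.doclet.StandardDoclet (available since JDK 10) and delegate.
public class ExcludePrivateAnnotationsDoclet implements Doclet {
    private final StandardDoclet standard = new StandardDoclet();

    @Override
    public void init(Locale locale, Reporter reporter) {
        standard.init(locale, reporter);
    }

    @Override
    public String getName() {
        return "ExcludePrivateAnnotations";
    }

    // Per-option checking here replaces the old static
    // validOptions(String[][], DocErrorReporter) hook.
    @Override
    public Set<? extends Option> getSupportedOptions() {
        return standard.getSupportedOptions();
    }

    @Override
    public SourceVersion getSupportedSourceVersion() {
        return standard.getSupportedSourceVersion();
    }

    @Override
    public boolean run(DocletEnvironment environment) {
        // Filtering of private/annotated elements would go here before delegating.
        return standard.run(environment);
    }
}
```

Such a class would be wired in via `javadoc -doclet ExcludePrivateAnnotationsDoclet -docletpath ...`, which is why the old doclet configuration has to live in a JDK-version-gated Maven profile.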

> [JDK10] Migrate from com.sun.tools.doclets to the replacement
> -
>
> Key: HADOOP-15304
> URL: https://issues.apache.org/jira/browse/HADOOP-15304
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15304.01.patch, HADOOP-15304.02.patch, 
> HADOOP-15304.03.patch
>
>
> com.sun.tools.doclets.* packages were removed in Java 10. 
> [https://bugs.openjdk.java.net/browse/JDK-8177511]
> This causes hadoop-annotations module to fail.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Compilation failure: Compilation failure:
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java:[61,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java:[56,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> {noformat}






[jira] [Updated] (HADOOP-15304) [JDK10] Migrate from com.sun.tools.doclets to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15304:
---
Attachment: HADOOP-15304.03.patch

> [JDK10] Migrate from com.sun.tools.doclets to the replacement
> -
>
> Key: HADOOP-15304
> URL: https://issues.apache.org/jira/browse/HADOOP-15304
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15304.01.patch, HADOOP-15304.02.patch, 
> HADOOP-15304.03.patch
>
>
> com.sun.tools.doclets.* packages were removed in Java 10. 
> [https://bugs.openjdk.java.net/browse/JDK-8177511]
> This causes hadoop-annotations module to fail.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Compilation failure: Compilation failure:
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java:[61,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java:[56,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> {noformat}






[jira] [Updated] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15741:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~tasanuma0829].

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15741.1.patch, HADOOP-15741.2.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or later to 
> support Java 10.






[jira] [Updated] (HADOOP-15304) [JDK10] Migrate from com.sun.tools.doclets to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15304:
---
Status: Open  (was: Patch Available)

> [JDK10] Migrate from com.sun.tools.doclets to the replacement
> -
>
> Key: HADOOP-15304
> URL: https://issues.apache.org/jira/browse/HADOOP-15304
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15304.01.patch, HADOOP-15304.02.patch
>
>
> com.sun.tools.doclets.* packages were removed in Java 10. 
> [https://bugs.openjdk.java.net/browse/JDK-8177511]
> This causes hadoop-annotations module to fail.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Compilation failure: Compilation failure:
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java:[61,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java:[56,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> {noformat}






[jira] [Created] (HADOOP-15764) [JDK10] Migrate from sun.net.dns.ResolverConfiguration to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15764:
--

 Summary: [JDK10] Migrate from sun.net.dns.ResolverConfiguration to 
the replacement
 Key: HADOOP-15764
 URL: https://issues.apache.org/jira/browse/HADOOP-15764
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: net, util
Reporter: Akira Ajisaka


In JDK10, sun.net.dns.ResolverConfiguration is encapsulated and not accessible 
from unnamed modules. This issue is to remove the usage of 
ResolverConfiguration.
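One supported way to recover the information ResolverConfiguration provided (the platform's configured DNS resolvers) is the JNDI DNS provider, which derives the nameservers when given an empty `dns://` provider URL. This is a hedged illustration of that technique, not the patch attached to this issue.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

// Sketch: query the platform's DNS resolver configuration through the
// JNDI DNS provider instead of the encapsulated sun.net.dns.ResolverConfiguration.
public class PlatformDnsServers {
    public static String discover() {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        // An empty host tells the provider to derive servers from the platform.
        env.put(Context.PROVIDER_URL, "dns://");
        try {
            InitialDirContext ctx = new InitialDirContext(env);
            // The resolved environment now lists the discovered resolvers,
            // e.g. "dns://10.0.0.2 dns://10.0.0.3" on a typical Linux host.
            String servers = (String) ctx.getEnvironment().get(Context.PROVIDER_URL);
            ctx.close();
            return servers;
        } catch (NamingException e) {
            return null; // no resolver configuration could be derived
        }
    }
}
```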






[jira] [Commented] (HADOOP-15756) [JDK10] Migrate from sun.net.util.IPAddressUtil to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618313#comment-16618313
 ] 

Akira Ajisaka commented on HADOOP-15756:


Filed HADOOP-15764.

> [JDK10] Migrate from sun.net.util.IPAddressUtil to the replacement
> --
>
> Key: HADOOP-15756
> URL: https://issues.apache.org/jira/browse/HADOOP-15756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net, util
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15756.01.patch
>
>
> In JDK10, sun.net.util.IPAddressUtil is encapsulated and not accessible from 
> unnamed modules. This issue is to remove the usage of IPAddressUtil.
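For the common use of IPAddressUtil.isIPv4LiteralAddress, a JDK-only replacement can be written by hand. This is an illustrative sketch under the assumption that only dotted-quad IPv4 literals matter, not the attached patch.

```java
// Minimal JDK-only check for dotted-quad IPv4 literals, roughly what callers
// of sun.net.util.IPAddressUtil.isIPv4LiteralAddress need. Deliberately avoids
// InetAddress.getByName, which would trigger a DNS lookup for hostnames.
public class IpLiteralCheck {
    public static boolean isIPv4Literal(String s) {
        String[] parts = s.split("\\.", -1); // -1 keeps trailing empty fields
        if (parts.length != 4) {
            return false;
        }
        for (String part : parts) {
            if (part.isEmpty() || part.length() > 3) {
                return false;
            }
            for (char c : part.toCharArray()) {
                if (c < '0' || c > '9') {
                    return false;
                }
            }
            if (Integer.parseInt(part) > 255) {
                return false;
            }
        }
        return true;
    }
}
```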






[jira] [Commented] (HADOOP-15756) [JDK10] Migrate from sun.net.util.IPAddressUtil to the replacement

2018-09-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618308#comment-16618308
 ] 

Akira Ajisaka commented on HADOOP-15756:


Yes, I'd like to remove sun.net.dns.ResolverConfiguration as well. I'll file a 
jira shortly.

> [JDK10] Migrate from sun.net.util.IPAddressUtil to the replacement
> --
>
> Key: HADOOP-15756
> URL: https://issues.apache.org/jira/browse/HADOOP-15756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net, util
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15756.01.patch
>
>
> In JDK10, sun.net.util.IPAddressUtil is encapsulated and not accessible from 
> unnamed modules. This issue is to remove the usage of IPAddressUtil.






[jira] [Commented] (HADOOP-15744) AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch

2018-09-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618298#comment-16618298
 ] 

Hadoop QA commented on HADOOP-15744:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
35s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
31s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 17s{color} | {color:orange} root: The patch generated 5 new + 34 unchanged - 
1 fixed = 39 total (was 35) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 15s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  9s{color} 
| {color:red} hadoop-azure in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15744 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940085/HADOOP-15744-HADOOP-15407-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cb2bb8853134 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-15407 / b4c2304 |
| maven | version: Apache Maven 3.

[jira] [Comment Edited] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618268#comment-16618268
 ] 

Thomas Marquardt edited comment on HADOOP-15719 at 9/17/18 11:18 PM:
-

*TestOauthFailOverHttp.java*: The import order for the org.apache.* classes 
should be as follows:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
  import org.apache.hadoop.fs.azurebfs.constants.FileSystemUriSchemes;

 

*TestOauthFailOverHttp.java*: The test case testOauthFailWithSchemeAbfs passes 
even without
  your fix.  You probably meant to remove the try/catch and use 
  @Test(expected = IllegalArgumentException.class).


was (Author: tmarquardt):
# TestOauthFailOverHttp.java: The import order for the org.apache.* classes 
should be as follows:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
import org.apache.hadoop.fs.azurebfs.constants.FileSystemUriSchemes;



 # TestOauthFailOverHttp.java: The test case testOauthFailWithSchemeAbfs passes 
even without your fix.  You probably meant to remove the try/catch and use  
@Test(expected = IllegalArgumentException.class).

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch, HADOOP-15719-HADOOP-15407-003.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss://, it will fail, 
> but it takes a very long time, and it still isn't very clear why. This is a 
> good place to put a human-readable exception message and fail fast.
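The fail-fast behavior requested above can be sketched as a simple up-front check. The names here are illustrative only; the actual patch works against the ABFS AuthType and FileSystemUriSchemes constants rather than plain strings.

```java
// Hedged sketch of the requested fail-fast: reject OAuth over the unencrypted
// abfs:// scheme immediately with a clear message, instead of letting requests
// fail slowly and opaquely later.
public class OauthSchemeCheck {
    public static void validate(String scheme, String authType) {
        if ("OAuth".equals(authType) && "abfs".equals(scheme)) {
            throw new IllegalArgumentException(
                "Incorrect URI: OAuth requires a secure connection. "
                    + "Use abfss:// instead of abfs://");
        }
    }
}
```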






[jira] [Commented] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618268#comment-16618268
 ] 

Thomas Marquardt commented on HADOOP-15719:
---

# TestOauthFailOverHttp.java: The import order for the org.apache.* classes 
should be as follows:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
import org.apache.hadoop.fs.azurebfs.constants.FileSystemUriSchemes;



 # TestOauthFailOverHttp.java: The test case testOauthFailWithSchemeAbfs passes 
even without your fix.  You probably meant to remove the try/catch and use  
@Test(expected = IllegalArgumentException.class).

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch, HADOOP-15719-HADOOP-15407-003.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss://, it will fail, 
> but it takes a very long time, and it still isn't very clear why. This is a 
> good place to put a human-readable exception message and fail fast.






[jira] [Commented] (HADOOP-15711) Fix branch-2 builds

2018-09-17 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618260#comment-16618260
 ] 

Jonathan Hung commented on HADOOP-15711:


[~asuresh]/[~xkrogen]/[~shv] helped me investigate. Basically, the last email we 
got from hadoop-qbt-branch2-java7-linux-x86 that actually ran unit tests was 
on Feb 26. I see this was committed on Feb 26 to branch-2/branch-2.9 as well: 
{noformat}commit 762125b864ab812512bad9a59344ca79af7f43ac
Author: Chris Douglas 
Date:   Mon Feb 26 16:32:06 2018 -0800

Backport HADOOP-13514 (surefire upgrade) to branch-2{noformat}

I see this was committed to branch-2.8 as well but eventually reverted.

So I am wondering if we can try a test run with this patch reverted so we can 
see the results. [~aw] thoughts on this? Do you know if reverting this will 
cause issues on the jenkins infra?

> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveal some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. I'm not sure whether the error seen locally is the same 
> reason the Jenkins builds are failing; I wasn't able to confirm, given the 
> Jenkins builds' lack of output.






[jira] [Commented] (HADOOP-15744) AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch

2018-09-17 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618250#comment-16618250
 ] 

Thomas Marquardt commented on HADOOP-15744:
---

+1, looks good to me.  I agree with your comments above.

> AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch
> 
>
> Key: HADOOP-15744
> URL: https://issues.apache.org/jira/browse/HADOOP-15744
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HADOOP-15407
>Reporter: Andras Bokor
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15744-HADOOP-15407-001.patch
>
>
> {code:java}
> mvn test 
> -Dtest=TestHDFSContractAppend#testAppendDirectory,TestRouterWebHDFSContractAppend#testAppendDirectory{code}
> In the case of TestHDFSContractAppend, the test expects 
> FileAlreadyExistsException, but HDFS sends the exception wrapped in a 
> RemoteException.
> In the case of TestRouterWebHDFSContractAppend, the append does not even throw 
> an exception.
> [~ste...@apache.org], [~tmarquardt], any thoughts?






[jira] [Commented] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618243#comment-16618243
 ] 

Hadoop QA commented on HADOOP-15719:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
37s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
13s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-15719 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940080/HADOOP-15719-HADOOP-15407-003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e5fe0c59a3a9 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-15407 / 8873d29 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15209/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15209/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fail-fast when using OAuth over http
> 
>
>

[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-09-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618227#comment-16618227
 ] 

Steve Loughran commented on HADOOP-15407:
-

bq. I have not seen the HADOOP-15761  failure, but I'm fine with updating ABFS 
to not use regex, or whatever it takes to make it robust.  Someone who is able 
to reproduce the failure should fix it. Maybe I don't see it because I don't 
have OpenSSL?

I'm not worried about it; I saw it on an IDE run of all the abfs tests. I don't 
see it on standalone test runs, which makes me think it's some state preserved 
from a previous test.

I've just created HADOOP-15763 for all those followup issues. I think 
HADOOP-15761 is in that category.

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Da Zhou
>Priority: Blocker
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, HADOOP-15407-008.patch, 
> HADOOP-15407-HADOOP-15407-008.patch, HADOOP-15407-HADOOP-15407.006.patch, 
> HADOOP-15407-HADOOP-15407.007.patch, HADOOP-15407-HADOOP-15407.008.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<filesystem>@<accountname>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement for WASB. WASB is not 
> deprecated but is in pure maintenance mode, and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) for Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
> {color:#212121}          .         This avoids the need for using 
> temporary/intermediate files, increasing the cost (and framework complexity 
> around committing jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> Junit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used but not as default file system.) Various 
> different customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects the version of the code 
> tested and used in our production environment.{color}

[jira] [Commented] (HADOOP-15723) ABFS: Ranger Support

2018-09-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618228#comment-16618228
 ] 

Hadoop QA commented on HADOOP-15723:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
44s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 2 
new + 7 unchanged - 0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-15723 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940070/HADOOP-15273-HADOOP-15407-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a48d2250c4cb 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-15407 / 8873d29 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15208/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15208/testReport/ |
| Max. process+thread count | 303 (vs. ulimit of 1) |
| modules | C: h

[jira] [Created] (HADOOP-15763) Über-JIRA: abfs phase II: Hadoop 3.3 features & fixes

2018-09-17 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15763:
---

 Summary: Über-JIRA: abfs phase II: Hadoop 3.3 features & fixes
 Key: HADOOP-15763
 URL: https://issues.apache.org/jira/browse/HADOOP-15763
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Steve Loughran


ABFS phase II: address issues which surface in the field; tune things which 
need tuning, add more tests where appropriate. Improve docs, especially 
troubleshooting. Classpaths. The usual.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15754) s3guard: testDynamoTableTagging should clear existing config

2018-09-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618214#comment-16618214
 ] 

Hudson commented on HADOOP-15754:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14982 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14982/])
HADOOP-15754. s3guard: testDynamoTableTagging should clear existing (stevel: 
rev 26d0c63a1e2eea6558fca2c55c134c02ecc93bf8)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java


> s3guard: testDynamoTableTagging should clear existing config
> 
>
> Key: HADOOP-15754
> URL: https://issues.apache.org/jira/browse/HADOOP-15754
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15754.001.patch, HADOOP-15754.002.patch
>
>
> I recently committed HADOOP-14734 which adds support for tagging Dynamo DB 
> tables for S3Guard when they are created.
>  
> Later, when testing another patch, I hit a test failure because I still had a 
> tag option set in my test configuration (auth-keys.xml) that was adding my 
> own table tag.
> {noformat}
> [ERROR] 
> testDynamoTableTagging(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 13.384 s  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<3>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:743)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at org.junit.Assert.assertEquals(Assert.java:555)
>         at org.junit.Assert.assertEquals(Assert.java:542)
>         at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoTableTagging(ITestS3GuardToolDynamoDB.java:129)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>         at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){noformat}
> I think the solution is just to clear any tag.* options set in the 
> configuration at the beginning of the test.






[jira] [Updated] (HADOOP-15744) AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch

2018-09-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15744:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-15407

> AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch
> 
>
> Key: HADOOP-15744
> URL: https://issues.apache.org/jira/browse/HADOOP-15744
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HADOOP-15407
>Reporter: Andras Bokor
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15744-HADOOP-15407-001.patch
>
>
> {code:java}
> mvn test 
> -Dtest=TestHDFSContractAppend#testAppendDirectory,TestRouterWebHDFSContractAppend#testAppendDirectory{code}
> In case of TestHDFSContractAppend the test expects FileAlreadyExistsException 
> but HDFS sends the exception wrapped in a RemoteException.
> In case of TestRouterWebHDFSContractAppend the append does not even throw an 
> exception.
> [~ste...@apache.org], [~tmarquardt], any thoughts?
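The wrapping problem described above can be shown with stand-in classes (these are hypothetical, not Hadoop's real types; the actual fix would rely on org.apache.hadoop.ipc.RemoteException#unwrapRemoteException, which reconstructs the original exception from the class name carried over the wire):

```java
// Stand-in for the server-side exception the contract test expects.
class FileAlreadyExistsException extends RuntimeException {
    FileAlreadyExistsException(String msg) { super(msg); }
}

// Stand-in for the RPC wrapper that carries the remote class name.
class RemoteException extends RuntimeException {
    private final String className;

    RemoteException(String className, String msg) {
        super(msg);
        this.className = className;
    }

    // Reconstruct the expected exception if the remote class name matches;
    // otherwise hand back the wrapper unchanged.
    RuntimeException unwrap(Class<? extends RuntimeException> expected) {
        if (expected == FileAlreadyExistsException.class
                && expected.getName().equals(className)) {
            return new FileAlreadyExistsException(getMessage());
        }
        return this;
    }
}

public class UnwrapDemo {
    public static void main(String[] args) {
        RuntimeException raw = new RemoteException(
            FileAlreadyExistsException.class.getName(), "target is a directory");
        // A plain instanceof check fails on the wrapper, which is why the
        // contract test fails against HDFS...
        System.out.println(raw instanceof FileAlreadyExistsException);
        // ...but succeeds once the wrapper is unwrapped.
        RuntimeException unwrapped =
            ((RemoteException) raw).unwrap(FileAlreadyExistsException.class);
        System.out.println(unwrapped instanceof FileAlreadyExistsException);
    }
}
```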






[jira] [Commented] (HADOOP-15744) AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch

2018-09-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618211#comment-16618211
 ] 

Steve Loughran commented on HADOOP-15744:
-

testing: abfs amsterdam

> AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch
> 
>
> Key: HADOOP-15744
> URL: https://issues.apache.org/jira/browse/HADOOP-15744
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: HADOOP-15407
>Reporter: Andras Bokor
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15744-HADOOP-15407-001.patch
>
>
> {code:java}
> mvn test 
> -Dtest=TestHDFSContractAppend#testAppendDirectory,TestRouterWebHDFSContractAppend#testAppendDirectory{code}
> In case of TestHDFSContractAppend the test expects FileAlreadyExistsException 
> but HDFS sends the exception wrapped in a RemoteException.
> In case of TestRouterWebHDFSContractAppend the append does not even throw an 
> exception.
> [~ste...@apache.org], [~tmarquardt], any thoughts?






[jira] [Updated] (HADOOP-15744) AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch

2018-09-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15744:

Target Version/s: 3.2.0
  Status: Patch Available  (was: Open)

patch 001; reverts all changes to the append tests

> AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch
> 
>
> Key: HADOOP-15744
> URL: https://issues.apache.org/jira/browse/HADOOP-15744
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: HADOOP-15407
>Reporter: Andras Bokor
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15744-HADOOP-15407-001.patch
>
>
> {code:java}
> mvn test 
> -Dtest=TestHDFSContractAppend#testAppendDirectory,TestRouterWebHDFSContractAppend#testAppendDirectory{code}
> In case of TestHDFSContractAppend the test expects FileAlreadyExistsException 
> but HDFS sends the exception wrapped in a RemoteException.
> In case of TestRouterWebHDFSContractAppend the append does not even throw an 
> exception.
> [~ste...@apache.org], [~tmarquardt], any thoughts?






[jira] [Updated] (HADOOP-15744) AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch

2018-09-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15744:

Attachment: HADOOP-15744-HADOOP-15407-001.patch

> AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch
> 
>
> Key: HADOOP-15744
> URL: https://issues.apache.org/jira/browse/HADOOP-15744
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: HADOOP-15407
>Reporter: Andras Bokor
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15744-HADOOP-15407-001.patch
>
>
> {code:java}
> mvn test 
> -Dtest=TestHDFSContractAppend#testAppendDirectory,TestRouterWebHDFSContractAppend#testAppendDirectory{code}
> In case of TestHDFSContractAppend the test expects FileAlreadyExistsException 
> but HDFS sends the exception wrapped in a RemoteException.
> In case of TestRouterWebHDFSContractAppend the append does not even throw an 
> exception.
> [~ste...@apache.org], [~tmarquardt], any thoughts?






[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-09-17 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618204#comment-16618204
 ] 

Thomas Marquardt commented on HADOOP-15407:
---

I was able to successfully rebase on trunk.  The test results look good for 
hadoop-azure and hadoop-common.  I force pushed the changes to the HADOOP-15407 
branch.

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Da Zhou
>Priority: Blocker
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, HADOOP-15407-008.patch, 
> HADOOP-15407-HADOOP-15407-008.patch, HADOOP-15407-HADOOP-15407.006.patch, 
> HADOOP-15407-HADOOP-15407.007.patch, HADOOP-15407-HADOOP-15407.008.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<filesystem>@<accountname>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement for WASB. WASB is not 
> deprecated but is in pure maintenance mode, and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) for Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
> {color:#212121}          .         This avoids the need for using 
> temporary/intermediate files, increasing the cost (and framework complexity 
> around committing jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> Junit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used but not as default file system.) Various 
> different customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects the version of the code 
> tested and used in our production environment.{color}






[jira] [Commented] (HADOOP-15758) Filesystem.get(URI, Configuration, user) API not working with proxy users

2018-09-17 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618199#comment-16618199
 ] 

Hrishikesh Gadre commented on HADOOP-15758:
---

HADOOP-12953 is to add proxy user support to the FileSystem API. Linking it with 
this Jira since they are related.

> Filesystem.get(URI, Configuration, user) API not working with proxy users
> -
>
> Key: HADOOP-15758
> URL: https://issues.apache.org/jira/browse/HADOOP-15758
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0, 3.0.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Major
> Attachments: HADOOP-15758-001.patch
>
>
> A user reported that the Filesystem.get API is not working as expected when 
> they use the 'FileSystem.get(URI, Configuration, user)' method signature - 
> but 'FileSystem.get(URI, Configuration)' works fine. The user is trying to 
> use this method signature to mimic proxy user functionality e.g. provide 
> ticket cache based kerberos credentials (using KRB5CCNAME env variable) for 
> the proxy user and then in the java program pass name of the user to be 
> impersonated. The alternative, to use [proxy users 
> functionality|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Superusers.html]
>  in Hadoop works as expected.
>  
> Since FileSystem.get(URI, Configuration, user) is a public API and it does 
> not restrict its usage in this fashion, we should ideally make it work or add 
> docs to discourage its usage to implement proxy users.
>  
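The proxy-user functionality recommended above (UserGroupInformation.createProxyUser(...) followed by ugi.doAs(...)) is built on the JAAS Subject.doAs pattern. A JDK-only sketch of that pattern follows; the class and variable names are illustrative and no Hadoop classes are involved:

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

// JDK-only sketch of the doAs pattern behind Hadoop proxy users.
// In Hadoop, UserGroupInformation.createProxyUser("alice", realUser).doAs(...)
// ultimately runs the action inside a Subject context like this one.
public class ProxyUserSketch {
    static <T> T runAsProxy(Subject proxy, PrivilegedAction<T> action) {
        // Everything the action does (e.g. FileSystem.get(conf) in Hadoop)
        // sees the proxy identity from the enclosing doAs context.
        return Subject.doAs(proxy, action);
    }

    public static void main(String[] args) {
        Subject proxy = new Subject(); // stands in for the impersonated user
        String result = runAsProxy(proxy, () -> "ran as proxy");
        System.out.println(result);
    }
}
```

This is why the documented proxy-user route works where passing a bare user name to FileSystem.get(URI, Configuration, user) does not: the impersonation context is established around the action, not inferred from a string argument.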






[jira] [Commented] (HADOOP-15744) AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch

2018-09-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618198#comment-16618198
 ] 

Steve Loughran commented on HADOOP-15744:
-

Created HADOOP-15762 to deal with those contract tests elsewhere; pulling them 
out of HADOOP-14507 because even though ABFS is happy, HDFS clearly isn't 
rejecting the requests consistently. A side project for another day.

> AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch
> 
>
> Key: HADOOP-15744
> URL: https://issues.apache.org/jira/browse/HADOOP-15744
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: HADOOP-15407
>Reporter: Andras Bokor
>Assignee: Steve Loughran
>Priority: Minor
>
> {code:java}
> mvn test 
> -Dtest=TestHDFSContractAppend#testAppendDirectory,TestRouterWebHDFSContractAppend#testAppendDirectory{code}
> In case of TestHDFSContractAppend the test expects FileAlreadyExistsException 
> but HDFS sends the exception wrapped in a RemoteException.
> In case of TestRouterWebHDFSContractAppend the append does not even throw an 
> exception.
> [~ste...@apache.org], [~tmarquardt], any thoughts?






[jira] [Assigned] (HADOOP-15744) AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch

2018-09-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15744:
---

Assignee: Steve Loughran  (was: Andras Bokor)

> AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch
> 
>
> Key: HADOOP-15744
> URL: https://issues.apache.org/jira/browse/HADOOP-15744
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: HADOOP-15407
>Reporter: Andras Bokor
>Assignee: Steve Loughran
>Priority: Minor
>
> {code:java}
> mvn test 
> -Dtest=TestHDFSContractAppend#testAppendDirectory,TestRouterWebHDFSContractAppend#testAppendDirectory{code}
> In case of TestHDFSContractAppend the test expects FileAlreadyExistsException 
> but HDFS sends the exception wrapped in a RemoteException.
> In case of TestRouterWebHDFSContractAppend the append does not even throw an 
> exception.
> [~ste...@apache.org], [~tmarquardt], any thoughts?






[jira] [Created] (HADOOP-15762) AbstractContractAppendTest to add more tests, implementations to comply

2018-09-17 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15762:
---

 Summary: AbstractContractAppendTest to add more tests, 
implementations to comply
 Key: HADOOP-15762
 URL: https://issues.apache.org/jira/browse/HADOOP-15762
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.2.0
Reporter: Steve Loughran
Assignee: Steve Loughran


There are some extra append tests from HADOOP-14507; put them in 
{{AbstractContractAppendTest}} and make sure that the filesystems are all 
compliant.

Specifically:
# can you append over a directory?
# can you append to a file which you have just deleted?

How directory appends are rejected seems to vary.
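The two append questions can be probed against the local filesystem with JDK-only code. This shows only what java.nio does locally; HDFS, WebHDFS, and ABFS may each answer differently, which is exactly what the contract tests are meant to pin down:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AppendProbes {
    // Case 1: appending over a directory should fail.
    static boolean appendOverDirectoryFails(Path dir) {
        try {
            Files.write(dir, new byte[]{1}, StandardOpenOption.APPEND);
            return false;
        } catch (IOException expected) {
            return true; // local FS rejects opening a directory for write
        }
    }

    // Case 2: appending to a just-deleted file (APPEND implies WRITE but
    // not CREATE, so the open fails if the file no longer exists).
    static boolean appendToDeletedFileFails(Path file) {
        try {
            Files.write(file, new byte[]{1}, StandardOpenOption.APPEND);
            return false;
        } catch (IOException expected) {
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("append-probe");
        System.out.println("append over directory fails: "
            + appendOverDirectoryFails(dir));

        Path file = Files.createTempFile(dir, "victim", ".txt");
        Files.delete(file);
        System.out.println("append to deleted file fails: "
            + appendToDeletedFileFails(file));

        Files.delete(dir);
    }
}
```

A contract test would assert the filesystem-specific exception type for each case rather than a bare boolean, but the shape of the probes is the same.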






[jira] [Updated] (HADOOP-14833) Remove s3a user:secret authentication

2018-09-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14833:

Parent Issue: HADOOP-15220  (was: HADOOP-15620)

> Remove s3a user:secret authentication
> -
>
> Key: HADOOP-14833
> URL: https://issues.apache.org/jira/browse/HADOOP-14833
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, security
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14833-001.patch, HADOOP-14833-002.patch
>
>
> Remove the s3a://user:secret@host auth mechanism from S3a. 
> As well as being insecure, it causes problems with S3Guard's URI matching 
> code.
> Proposed: cull it utterly. We've been telling people to stop using it since 
> HADOOP-3733






[jira] [Updated] (HADOOP-15754) s3guard: testDynamoTableTagging should clear existing config

2018-09-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15754:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

> s3guard: testDynamoTableTagging should clear existing config
> 
>
> Key: HADOOP-15754
> URL: https://issues.apache.org/jira/browse/HADOOP-15754
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15754.001.patch, HADOOP-15754.002.patch
>
>
> I recently committed HADOOP-14734 which adds support for tagging Dynamo DB 
> tables for S3Guard when they are created.
>  
> Later, when testing another patch, I hit a test failure because I still had a 
> tag option set in my test configuration (auth-keys.xml) that was adding my 
> own table tag.
> {noformat}
> [ERROR] 
> testDynamoTableTagging(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 13.384 s  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<3>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:743)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at org.junit.Assert.assertEquals(Assert.java:555)
>         at org.junit.Assert.assertEquals(Assert.java:542)
>         at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoTableTagging(ITestS3GuardToolDynamoDB.java:129)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>         at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){noformat}
> I think the solution is just to clear any tag.* options set in the 
> configuration at the beginning of the test.
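The proposed fix amounts to stripping any tag-prefixed entries from the test's configuration before the table is created, so stray options in auth-keys.xml can't add extra tags. A minimal plain-Java sketch of that idea, using a map in place of a Hadoop Configuration and an illustrative (not necessarily exact) property prefix:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of clearing tag.* options before the test runs. */
public class ClearTagOptions {

    /** Removes every entry whose key starts with the prefix; returns the count removed. */
    static int clearPrefixed(Map<String, String> conf, String prefix) {
        int before = conf.size();
        conf.keySet().removeIf(k -> k.startsWith(prefix));
        return before - conf.size();
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.s3a.s3guard.ddb.table.tag.owner", "me"); // stray tag from auth-keys.xml
        conf.put("fs.s3a.s3guard.ddb.table", "test-table");   // unrelated option, must survive
        int removed = clearPrefixed(conf, "fs.s3a.s3guard.ddb.table.tag.");
        System.out.println("removed=" + removed + ", remaining=" + conf.keySet());
    }
}
```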






[jira] [Resolved] (HADOOP-15226) Über-JIRA: S3Guard Phase III: Hadoop 3.2 features

2018-09-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15226.
-
   Resolution: Fixed
Fix Version/s: 3.2.0

Everything is done! Thank you all for your work here!

> Über-JIRA: S3Guard Phase III: Hadoop 3.2 features
> -
>
> Key: HADOOP-15226
> URL: https://issues.apache.org/jira/browse/HADOOP-15226
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Steve Loughran
>Priority: Major
> Fix For: 3.2.0
>
>
> S3Guard features/improvements/fixes for Hadoop 3.2






[jira] [Resolved] (HADOOP-15220) Über-jira: S3a phase V: Hadoop 3.2 features

2018-09-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15220.
-
   Resolution: Fixed
Fix Version/s: 3.2.0

> Über-jira: S3a phase V: Hadoop 3.2 features
> ---
>
> Key: HADOOP-15220
> URL: https://issues.apache.org/jira/browse/HADOOP-15220
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0
>
>
> Über-jira for S3A work for Hadoop 3.2.x
> The items from HADOOP-14831 which didn't get into Hadoop-3.1, and anything 
> else






[jira] [Commented] (HADOOP-15754) s3guard: testDynamoTableTagging should clear existing config

2018-09-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618182#comment-16618182
 ] 

Steve Loughran commented on HADOOP-15754:
-

Looking at my {{ITestDynamoDBMetadataStore}} tests, it looks like they skip if 
"fs.s3a.s3guard.ddb.test.table" isn't set (which I clearly haven't). So if you 
set that property, yes, you need the region. For best test coverage, then: set 
the region and set that test table property.

running with the latest patch, I get
{code}
[INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
[ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 
362.457 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
[ERROR] 
testSetCapacityFailFastIfNotGuarded(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
  Time elapsed: 1.668 s  <<< ERROR!
java.io.FileNotFoundException: DynamoDB table 
'c3161b52-1623-469b-8a76-1f4a6d0186e0' does not exist in region eu-west-1; 
auto-creation is turned off
Caused by: com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: 
Requested resource not found: Table: c3161b52-1623-469b-8a76-1f4a6d0186e0 not 
found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
ResourceNotFoundException; Request ID: 
RVI7QSR6VFFLENT2O767VJSVERVV4KQNSO5AEMVJF66Q9ASUAAJG)

[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   
ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testSetCapacityFailFastIfNotGuarded:330->AbstractS3GuardToolTestBase.lambda$testSetCapacityFailFastIfNotGuarded$2:331->AbstractS3GuardToolTestBase.run:115
 » FileNotFound
[INFO] 
[ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 1
[INFO] 
{code}

That test failure is unrelated: it's me having table auto-creation turned off. 
Separate issue.

+1 to this. Committing to 3.2.
Thanks!

> s3guard: testDynamoTableTagging should clear existing config
> 
>
> Key: HADOOP-15754
> URL: https://issues.apache.org/jira/browse/HADOOP-15754
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15754.001.patch, HADOOP-15754.002.patch
>
>
> I recently committed HADOOP-14734 which adds support for tagging Dynamo DB 
> tables for S3Guard when they are created.
>  
> Later, when testing another patch, I hit a test failure because I still had a 
> tag option set in my test configuration (auth-keys.xml) that was adding my 
> own table tag.
> {noformat}
> [ERROR] 
> testDynamoTableTagging(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 13.384 s  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<3>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:743)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at org.junit.Assert.assertEquals(Assert.java:555)
>         at org.junit.Assert.assertEquals(Assert.java:542)
>         at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoTableTagging(ITestS3GuardToolDynamoDB.java:129)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>         at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){noformat}
> I think the solution is just to clear any tag.* options set in the 
> configuration at the beginning of the test.






[jira] [Commented] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618171#comment-16618171
 ] 

Da Zhou commented on HADOOP-15719:
--

Submitting HADOOP-15719-HADOOP-15407-003.patch:
- fixed the checkstyle violations
- updated the test to explicitly expect IllegalArgumentException
- updated the import order.

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch, HADOOP-15719-HADOOP-15407-003.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.
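The fail-fast idea described above can be sketched in plain Java: reject the insecure scheme up front when an OAuth-style credential is configured, instead of letting requests fail slowly later. The method, scheme strings, and message here are illustrative assumptions, not the actual ABFS code.

```java
/** Hypothetical sketch of a fail-fast scheme check for OAuth configurations. */
public class FailFastScheme {

    /** Throws IllegalArgumentException when OAuth is combined with the non-TLS scheme. */
    static void validate(String scheme, String authType) {
        if ("OAuth".equals(authType) && !"abfss".equals(scheme)) {
            throw new IllegalArgumentException(
                "Incorrect URI scheme '" + scheme + "': OAuth requires abfss:// (TLS)");
        }
    }

    public static void main(String[] args) {
        validate("abfss", "OAuth");        // fine: OAuth over TLS
        try {
            validate("abfs", "OAuth");     // should fail fast with a clear message
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```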






[jira] [Updated] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15719:
-
Attachment: HADOOP-15719-HADOOP-15407-003.patch

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch, HADOOP-15719-HADOOP-15407-003.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Updated] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15719:
-
Attachment: (was: HADOOP-15719-HADOOP-15407-003.patch)

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Updated] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15719:
-
Attachment: HADOOP-15719-HADOOP-15407-003.patch

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch, HADOOP-15719-HADOOP-15407-003.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Updated] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15719:
-
Attachment: (was: HADOOP-15719-HADOOP-15407-003.patch)

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Updated] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15719:
-
Attachment: HADOOP-15719-HADOOP-15407-003.patch

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch, HADOOP-15719-HADOOP-15407-003.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Comment Edited] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618152#comment-16618152
 ] 

Sean Mackrory edited comment on HADOOP-15719 at 9/17/18 9:05 PM:
-

In addition to fixing the checkstyle issues, I think we should also explicitly 
fail if that exception isn't thrown. If something breaks in the future where 
this fails silently, we should still fail. +1 otherwise.


was (Author: mackrorysd):
In addition to fixing the checkstyle issues, I think we should also explicitly 
fail if that exception isn't thrown. If something breaks in the future where 
this fails silently, we should still fail.

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Commented] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618152#comment-16618152
 ] 

Sean Mackrory commented on HADOOP-15719:


In addition to fixing the checkstyle issues, I think we should also explicitly 
fail if that exception isn't thrown. If something breaks in the future where 
this fails silently, we should still fail.
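The test pattern being asked for, i.e. failing explicitly when the expected exception is not thrown, can be sketched as a small helper (the names are illustrative; JUnit's own assertion utilities would normally be used):

```java
/** Hypothetical sketch: run an action and fail unless the expected exception is thrown. */
public class ExpectException {

    /** Throws AssertionError unless the action throws the expected exception type. */
    static <T extends Exception> void assertThrows(Class<T> expected, Runnable action) {
        try {
            action.run();
        } catch (Exception e) {
            if (expected.isInstance(e)) {
                return;                     // the expected failure happened
            }
            throw new AssertionError("wrong exception: " + e, e);
        }
        // Reaching here means the action silently succeeded: fail explicitly.
        throw new AssertionError("expected " + expected.getSimpleName() + " was not thrown");
    }

    public static void main(String[] args) {
        assertThrows(IllegalArgumentException.class, () -> {
            throw new IllegalArgumentException("bad scheme");
        });
        System.out.println("ok");
    }
}
```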

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Commented] (HADOOP-15009) hadoop-resourceestimator's shell scripts are a mess

2018-09-17 Thread Subru Krishnan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618151#comment-16618151
 ] 

Subru Krishnan commented on HADOOP-15009:
-

[~sunilg], when I tried to review the patch, I realized I like the existing 
script better (semantically), as the Resource Estimator is independent of YARN. 

Ideally we should have a start/stop daemon script for tools, like we have for 
YARN/HDFS, and then move all the tools such as Resource Estimator and SLS to 
it. For now, I have set the priority to Major.

> hadoop-resourceestimator's shell scripts are a mess
> ---
>
> Key: HADOOP-15009
> URL: https://issues.apache.org/jira/browse/HADOOP-15009
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts, tools
>Affects Versions: 3.1.0
>Reporter: Allen Wittenauer
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15009.001.patch, Screen Shot 2017-12-12 at 
> 11.16.23 AM.png
>
>
> #1:
> There's no reason for estimator.sh to exist.  Just make it a subcommand under 
> yarn or whatever.  
> #2:
> In its current form, it's missing a BUNCH of boilerplate that makes certain 
> functionality completely fail.
> #3
> start/stop-estimator.sh is full of copypasta that doesn't actually do 
> anything/work correctly.  Additionally, if estimator.sh doesn't exist, 
> neither does this since yarn --daemon start/stop will do everything as 
> necessary.  






[jira] [Commented] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618147#comment-16618147
 ] 

Hadoop QA commented on HADOOP-15719:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
57s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 2 
new + 7 unchanged - 0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-15719 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940055/HADOOP-15719-HADOOP-15407-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 29e7823bb4b3 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-15407 / 8873d29 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15207/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15207/testReport/ |
| Max. process+thread count | 330 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Bui

[jira] [Updated] (HADOOP-15009) hadoop-resourceestimator's shell scripts are a mess

2018-09-17 Thread Subru Krishnan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated HADOOP-15009:

Priority: Major  (was: Blocker)

> hadoop-resourceestimator's shell scripts are a mess
> ---
>
> Key: HADOOP-15009
> URL: https://issues.apache.org/jira/browse/HADOOP-15009
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts, tools
>Affects Versions: 3.1.0
>Reporter: Allen Wittenauer
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15009.001.patch, Screen Shot 2017-12-12 at 
> 11.16.23 AM.png
>
>
> #1:
> There's no reason for estimator.sh to exist.  Just make it a subcommand under 
> yarn or whatever.  
> #2:
> In its current form, it's missing a BUNCH of boilerplate that makes certain 
> functionality completely fail.
> #3
> start/stop-estimator.sh is full of copypasta that doesn't actually do 
> anything/work correctly.  Additionally, if estimator.sh doesn't exist, 
> neither does this since yarn --daemon start/stop will do everything as 
> necessary.  






[jira] [Updated] (HADOOP-15010) hadoop-resourceestimator's assembly buries it

2018-09-17 Thread Subru Krishnan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated HADOOP-15010:

Priority: Major  (was: Blocker)

> hadoop-resourceestimator's assembly buries it
> -
>
> Key: HADOOP-15010
> URL: https://issues.apache.org/jira/browse/HADOOP-15010
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, tools
>Affects Versions: 2.9.0, 3.1.0
>Reporter: Allen Wittenauer
>Priority: Major
>
> There's zero reason for this layout:
> {code}
> hadoop-3.1.0-SNAPSHOT/share/hadoop/tools/resourceestimator
>  - bin
>  - conf
>  - data
> {code}
> Buried that far back, it might as well not exist.
> Propose:
> a) HADOOP-15009 to eliminate bin
> b) Move conf file into etc/hadoop
> c) keep data where it's at






[jira] [Updated] (HADOOP-15723) ABFS: Ranger Support

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15723:
-
Attachment: HADOOP-15273-HADOOP-15407-001.patch
Status: Patch Available  (was: Open)

Submitting HADOOP-15273-HADOOP-15407-001.patch for review on behalf of 
[~kowon2008]

> ABFS: Ranger Support
> 
>
> Key: HADOOP-15723
> URL: https://issues.apache.org/jira/browse/HADOOP-15723
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15273-HADOOP-15407-001.patch
>
>
> Add support for Ranger






[jira] [Updated] (HADOOP-14833) Remove s3a user:secret authentication

2018-09-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14833:

Release Note: 
The S3A connector no longer supports username and secrets in URLs of the form 
`s3a://key:secret@bucket/`. It is near-impossible to stop those secrets being 
logged, which is why a warning has been printed since Hadoop 2.8 whenever such 
a URL was used.

Fix: use a more secure mechanism to pass down the secrets.

  was:
After this patch, the S3A connector no longer supports username and secrets in 
URLs of the form `s3a://key:secret@bucket/`. It is near-impossible to stop 
those secrets being logged —which is why a warning has been printed since 
Hadoop 2.8 whenever such a URL was used.

Fix: use a more secure mechanism to pass down the secrets.


> Remove s3a user:secret authentication
> -
>
> Key: HADOOP-14833
> URL: https://issues.apache.org/jira/browse/HADOOP-14833
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, security
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14833-001.patch, HADOOP-14833-002.patch
>
>
> Remove the s3a://user:secret@host auth mechanism from S3a. 
> As well as being insecure, it causes problems with S3Guard's URI matching 
> code.
> Proposed: cull it utterly. We've been telling people to stop using it since 
> HADOOP-3733



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14833) Remove s3a user:secret authentication

2018-09-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14833:

Fix Version/s: (was: 3.3.0)
   3.2.0

> Remove s3a user:secret authentication
> -
>
> Key: HADOOP-14833
> URL: https://issues.apache.org/jira/browse/HADOOP-14833
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, security
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14833-001.patch, HADOOP-14833-002.patch
>
>
> Remove the s3a://user:secret@host auth mechanism from S3a. 
> As well as being insecure, it causes problems with S3Guard's URI matching 
> code.
> Proposed: cull it utterly. We've been telling people to stop using it since 
> HADOOP-3733






[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-09-17 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618111#comment-16618111
 ] 

Thomas Marquardt commented on HADOOP-15407:
---

Ok, it is good to know that the rebase was a success for you.  I am still 
running tests, but so far, so good.  I will force push if all the tests pass 
and infra allows me to do so. 

I have not seen the HADOOP-15761 failure, but I'm fine with updating ABFS to 
not use regex, or whatever it takes to make it robust.  Someone who is able to 
reproduce the failure should fix it. Maybe I don't see it because I don't have 
OpenSSL?

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Da Zhou
>Priority: Blocker
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, HADOOP-15407-008.patch, 
> HADOOP-15407-HADOOP-15407-008.patch, HADOOP-15407-HADOOP-15407.006.patch, 
> HADOOP-15407-HADOOP-15407.007.patch, HADOOP-15407-HADOOP-15407.008.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement to WASB. WASB is not 
> deprecated but is in pure maintenance mode and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) for Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
> {color:#212121}· This avoids the need for using temporary/intermediate 
> files, which increase cost (and framework complexity around committing 
> jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> Junit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used but not as default file system.) Various 
> different customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects the version of the code tested 
> and used in our production environment.{color}





[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-09-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618086#comment-16618086
 ] 

Steve Loughran commented on HADOOP-15407:
-

I've just done a rebase & retest locally, only one issue w.r.t abfs testing: 
HADOOP-15761, and that's nothing serious.

I don't know if you'll be able to force push the rebased branch, as infra 
likes to lock that down to stop anyone force-pushing a rollback of a branch.

Try it; if it doesn't take, file a JIRA on the INFRA project asking for force 
pushes to be enabled on the branch.

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Da Zhou
>Priority: Blocker
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, HADOOP-15407-008.patch, 
> HADOOP-15407-HADOOP-15407-008.patch, HADOOP-15407-HADOOP-15407.006.patch, 
> HADOOP-15407-HADOOP-15407.007.patch, HADOOP-15407-HADOOP-15407.008.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement to WASB. WASB is not 
> deprecated but is in pure maintenance mode and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) for Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
> {color:#212121}· This avoids the need for using temporary/intermediate 
> files, which increase cost (and framework complexity around committing 
> jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> Junit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used but not as default file system.) Various 
> different customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects the version of the code tested 
> and used in our production environment.{color}





[jira] [Commented] (HADOOP-15761) intermittent failure of TestAbfsClient.validateUserAgent

2018-09-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618077#comment-16618077
 ] 

Steve Loughran commented on HADOOP-15761:
-

Stack
{code}
java.lang.AssertionError: User agent Azure Blob FS/1.0 (JavaJRE 1.8.0_121; 
MacOSX 10.13.6; openssl-1.0) Partner Service does not match regexp Azure Blob 
FS\/1.0 \(JavaJRE ([^\)]+) SunJSSE-1.8\) Partner Service

at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.fs.azurebfs.services.TestAbfsClient.validateUserAgent(TestAbfsClient.java:52)
at 
org.apache.hadoop.fs.azurebfs.services.TestAbfsClient.verifyUserAgentWithSSLProvider(TestAbfsClient.java:86)
{code}

This test works standalone; it only fails in sequential test runs in the same 
JVM as other tests. Likely cause: some singleton has been pre-initialized with 
the openssl/wildfly provider.
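A hedged sketch of the substring-based check suggested for this failure; the class and method names are illustrative, not the actual TestAbfsClient code:

```java
// Sketch: relax the brittle regex assertion into substring checks, so the
// user agent passes whether the SSL provider is SunJSSE or openssl/wildfly.
public class Main {
    static boolean looksLikeAbfsUserAgent(String userAgent) {
        // Require only the stable prefix and suffix; the JRE/OS/SSL details
        // in between vary by environment.
        return userAgent.startsWith("Azure Blob FS/1.0")
                && userAgent.endsWith("Partner Service");
    }

    public static void main(String[] args) {
        String sunJsse = "Azure Blob FS/1.0 (JavaJRE 1.8.0_121; "
                + "MacOSX 10.13.6; SunJSSE-1.8) Partner Service";
        String openSsl = "Azure Blob FS/1.0 (JavaJRE 1.8.0_121; "
                + "MacOSX 10.13.6; openssl-1.0) Partner Service";
        // Both variants pass the relaxed check; the SunJSSE-only regex
        // rejected the second one.
        System.out.println(looksLikeAbfsUserAgent(sunJsse)
                && looksLikeAbfsUserAgent(openSsl));
    }
}
```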

> intermittent failure of TestAbfsClient.validateUserAgent
> 
>
> Key: HADOOP-15761
> URL: https://issues.apache.org/jira/browse/HADOOP-15761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: HADOOP-15407
>Reporter: Steve Loughran
>Priority: Minor
>
> (seemingly intermittent) failure of the pattern matcher in 
> {{TestAbfsClient.validateUserAgent}}
> {code}
> java.lang.AssertionError: User agent Azure Blob FS/1.0 (JavaJRE 1.8.0_121; 
> MacOSX 10.13.6; openssl-1.0) Partner Service does not match regexp Azure Blob 
> FS\/1.0 \(JavaJRE ([^\)]+) SunJSSE-1.8\) Partner Service
> {code}
> Using a regexp is probably too brittle here: safest just to look for some 
> specific substring.






[jira] [Created] (HADOOP-15761) intermittent failure of TestAbfsClient.validateUserAgent

2018-09-17 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15761:
---

 Summary: intermittent failure of TestAbfsClient.validateUserAgent
 Key: HADOOP-15761
 URL: https://issues.apache.org/jira/browse/HADOOP-15761
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure, test
Affects Versions: HADOOP-15407
Reporter: Steve Loughran


(seemingly intermittent) failure of the pattern matcher in 
{{TestAbfsClient.validateUserAgent}}
{code}
java.lang.AssertionError: User agent Azure Blob FS/1.0 (JavaJRE 1.8.0_121; 
MacOSX 10.13.6; openssl-1.0) Partner Service does not match regexp Azure Blob 
FS\/1.0 \(JavaJRE ([^\)]+) SunJSSE-1.8\) Partner Service
{code}
Using a regexp is probably too brittle here: safest just to look for some 
specific substring.






[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-09-17 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618070#comment-16618070
 ] 

Thomas Marquardt commented on HADOOP-15407:
---

I am going to rebase branch HADOOP-15407 on the latest trunk today.

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Da Zhou
>Priority: Blocker
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, HADOOP-15407-008.patch, 
> HADOOP-15407-HADOOP-15407-008.patch, HADOOP-15407-HADOOP-15407.006.patch, 
> HADOOP-15407-HADOOP-15407.007.patch, HADOOP-15407-HADOOP-15407.008.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement to WASB. WASB is not 
> deprecated but is in pure maintenance mode and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) for Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
> {color:#212121}· This avoids the need for using temporary/intermediate 
> files, which increase cost (and framework complexity around committing 
> jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> Junit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used but not as default file system.) Various 
> different customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects the version of the code tested 
> and used in our production environment.{color}






[jira] [Updated] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15719:
-
Attachment: HADOOP-15719-HADOOP-15407-002.patch

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch, 
> HADOOP-15719-HADOOP-15407-002.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Updated] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15719:
-
Attachment: HADOOP-15719-HADOOP-15407-001.patch

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Updated] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15719:
-
Attachment: (was: HADOOP-15719-HADOOP-15407-001.patch)

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Updated] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15719:
-
Status: Patch Available  (was: Open)

Submitting HADOOP-15719-HADOOP-15407-001.patch.
- Added scheme verification when authenticating using OAuth.
- Verification happens during FS initialization, to avoid an unnecessary HTTP 
request.
- Unit test added.

Tests passed using my US WEST azure account:
 Tests run: 36, Failures: 0, Errors: 0, Skipped: 0
 Tests run: 269, Failures: 0, Errors: 0, Skipped: 30
 Tests run: 167, Failures: 0, Errors: 0, Skipped: 27
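A minimal sketch of what such a fail-fast scheme check can look like; the class and method names are hypothetical, not the actual ABFS patch:

```java
// Illustrative fail-fast check: reject OAuth over the insecure abfs:// scheme
// at initialization time, before any HTTP request is made.
public class Main {
    static void verifyOAuthScheme(String scheme, boolean oauthEnabled) {
        // OAuth bearer tokens must not travel over plain HTTP, so only the
        // TLS scheme (abfss) is acceptable when OAuth is configured.
        if (oauthEnabled && "abfs".equals(scheme)) {
            throw new IllegalArgumentException(
                "OAuth requires a secure connection: use abfss:// instead of abfs://");
        }
    }

    public static void main(String[] args) {
        verifyOAuthScheme("abfss", true);    // secure scheme: accepted
        try {
            verifyOAuthScheme("abfs", true); // insecure scheme: rejected early
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```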
  
 


> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Updated] (HADOOP-15760) Include Apache Commons Collections4

2018-09-17 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15760:
-
Description: Please allow for use of Apache Commons Collections 4 library 
with the end goal of migrating from Apache Commons Collections 3.  (was: Please 
allow for use of Apache Commons Collections 4 library with the end goal of 
migrating from Commons Collects 3.)

> Include Apache Commons Collections4
> ---
>
> Key: HADOOP-15760
> URL: https://issues.apache.org/jira/browse/HADOOP-15760
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0, 3.0.3
>Reporter: BELUGA BEHR
>Priority: Major
> Attachments: HADOOP-15760.1.patch
>
>
> Please allow for use of Apache Commons Collections 4 library with the end 
> goal of migrating from Apache Commons Collections 3.






[jira] [Updated] (HADOOP-15719) Fail-fast when using OAuth over http

2018-09-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15719:
-
Attachment: HADOOP-15719-HADOOP-15407-001.patch

> Fail-fast when using OAuth over http
> 
>
> Key: HADOOP-15719
> URL: https://issues.apache.org/jira/browse/HADOOP-15719
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15719-HADOOP-15407-001.patch
>
>
> If you configure OAuth and then use abfs:// instead of abfss:// it will fail, 
> but it takes a very long time, and still isn't very clear why. Good place to 
> put a good human-readable exception message and fail fast.






[jira] [Commented] (HADOOP-11423) [Umbrella] Fix Java 10 incompatibilities in Hadoop

2018-09-17 Thread Lars Francke (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617999#comment-16617999
 ] 

Lars Francke commented on HADOOP-11423:
---

I couldn't find any "official" lifecycle policy for OpenJDK, but both Oracle 
and RedHat will maintain an LTS release of Java 11. So I still believe it 
makes sense to target that one, as my guess is that lots of users will jump 
from 8 to 11.

Oracle for the Oracle JDK and RedHat for OpenJDK: 
[https://access.redhat.com/articles/3409141] 
[https://en.wikipedia.org/wiki/Java_version_history]

> [Umbrella] Fix Java 10 incompatibilities in Hadoop
> --
>
> Key: HADOOP-11423
> URL: https://issues.apache.org/jira/browse/HADOOP-11423
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: sneaky
>Priority: Major
>
> Java 10 is coming quickly to various clusters. Making sure Hadoop seamlessly 
> works with Java 10 is important for the Apache community.






[jira] [Commented] (HADOOP-15726) Create utility to limit frequency of log statements

2018-09-17 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617998#comment-16617998
 ] 

Chao Sun commented on HADOOP-15726:
---

Regarding 2), yes, I also find it a little strange that {{log}} doesn't log 
anything. As for renaming, I'm fine with either {{store}} or {{update}}; 
personally I'd suggest {{record}} as well. :)
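For illustration, the suppression pattern under discussion can be sketched as below; the names (LogThrottle, shouldLog) are hypothetical, not the actual HADOOP-15726 API:

```java
// Hypothetical sketch of a rate-limited log helper: at most one log statement
// per interval, with a count of suppressed events to include in the summary.
public class Main {
    static class LogThrottle {
        private final long minIntervalMs;
        private long lastLogMs;
        private long suppressed;

        LogThrottle(long minIntervalMs) {
            this.minIntervalMs = minIntervalMs;
            this.lastLogMs = -minIntervalMs; // so the first event always logs
        }

        /** Returns the number of suppressed events to summarize when the
         *  caller should log now, or -1 when this statement is suppressed. */
        synchronized long shouldLog(long nowMs) {
            if (nowMs - lastLogMs >= minIntervalMs) {
                long summary = suppressed;
                suppressed = 0;
                lastLogMs = nowMs;
                return summary;
            }
            suppressed++;
            return -1;
        }
    }

    public static void main(String[] args) {
        LogThrottle throttle = new LogThrottle(1000);
        System.out.println(throttle.shouldLog(0));    // logs, 0 suppressed
        System.out.println(throttle.shouldLog(10));   // suppressed
        System.out.println(throttle.shouldLog(20));   // suppressed
        System.out.println(throttle.shouldLog(1500)); // logs, 2 suppressed
    }
}
```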

> Create utility to limit frequency of log statements
> ---
>
> Key: HADOOP-15726
> URL: https://issues.apache.org/jira/browse/HADOOP-15726
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, util
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-15726.000.patch, HADOOP-15726.001.patch
>
>
> There is a common pattern of logging a behavior that is normally extraneous. 
> Under some circumstances, such a behavior becomes common, flooding the logs 
> and making it difficult to see what else is going on in the system. Under 
> such situations it is beneficial to limit how frequently the extraneous 
> behavior is logged, while capturing some summary information about the 
> suppressed log statements.
> This is currently implemented in {{FSNamesystemLock}} (in HDFS-10713). We 
> have additional use cases for this in HDFS-13791, so this is a good time to 
> create a common utility for different sites to share this logic.






[jira] [Commented] (HADOOP-15758) Filesystem.get(URI, Configuration, user) API not working with proxy users

2018-09-17 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617983#comment-16617983
 ] 

Eric Yang commented on HADOOP-15758:


[~hgadre] {quote}(b) if a ticket cache path is not specified and user name is 
provided, it creates a remote user {quote}

The ticket cache must be verified prior to creating a remote user. Without a 
valid ticket, Java code should not be able to create a remote user. A proxy 
user check must also be in place on the server side to prevent a security hole.

{quote}application provide the user name as well as the ticket cache path. The 
question is should it treat this as a proxy user scenario?{quote}

This seems like a valid use case that Spark and Hive would depend on.
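For reference, the documented proxy-user pattern (which the reporter says works as expected) looks roughly like the sketch below. The user name and URI are illustrative, the code needs hadoop-common on the classpath, and the server must have the matching hadoop.proxyuser.* grants configured:

```java
// Sketch of the supported proxy-user pattern, as opposed to calling
// FileSystem.get(uri, conf, "alice") directly. "alice" and the HDFS URI
// are placeholders.
import java.net.URI;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The real (e.g. Kerberos-authenticated) user comes from the
        // login context / ticket cache.
        UserGroupInformation realUser = UserGroupInformation.getLoginUser();
        // Impersonate "alice"; the NameNode enforces the proxy-user rules.
        UserGroupInformation proxyUgi =
            UserGroupInformation.createProxyUser("alice", realUser);
        FileSystem fs = proxyUgi.doAs(
            (PrivilegedExceptionAction<FileSystem>) () ->
                FileSystem.get(new URI("hdfs://namenode:8020/"), conf));
        System.out.println(fs.getUri());
    }
}
```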

> Filesystem.get(URI, Configuration, user) API not working with proxy users
> -
>
> Key: HADOOP-15758
> URL: https://issues.apache.org/jira/browse/HADOOP-15758
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0, 3.0.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Major
> Attachments: HADOOP-15758-001.patch
>
>
> A user reported that the Filesystem.get API is not working as expected when 
> they use the 'FileSystem.get(URI, Configuration, user)' method signature - 
> but 'FileSystem.get(URI, Configuration)' works fine. The user is trying to 
> use this method signature to mimic proxy user functionality e.g. provide 
> ticket cache based kerberos credentials (using KRB5CCNAME env variable) for 
> the proxy user and then in the java program pass name of the user to be 
> impersonated. The alternative, to use [proxy users 
> functionality|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Superusers.html]
>  in Hadoop works as expected.
>  
> Since FileSystem.get(URI, Configuration, user) is a public API and it does 
> not restrict its usage in this fashion, we should ideally make it work or add 
> docs to discourage its usage to implement proxy users.
>  






[jira] [Commented] (HADOOP-15758) Filesystem.get(URI, Configuration, user) API not working with proxy users

2018-09-17 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617937#comment-16617937
 ] 

Hrishikesh Gadre commented on HADOOP-15758:
---

I found that only a small code change is required to support this proxy user 
scenario; please find the attached patch. Note that this is for reference 
only, as I have not added any unit tests (just verified via manual testing), 
so it is not yet ready for commit.

 

> Filesystem.get(URI, Configuration, user) API not working with proxy users
> -
>
> Key: HADOOP-15758
> URL: https://issues.apache.org/jira/browse/HADOOP-15758
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0, 3.0.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Major
> Attachments: HADOOP-15758-001.patch
>
>
> A user reported that the Filesystem.get API is not working as expected when 
> they use the 'FileSystem.get(URI, Configuration, user)' method signature - 
> but 'FileSystem.get(URI, Configuration)' works fine. The user is trying to 
> use this method signature to mimic proxy user functionality e.g. provide 
> ticket cache based kerberos credentials (using KRB5CCNAME env variable) for 
> the proxy user and then in the java program pass name of the user to be 
> impersonated. The alternative, to use [proxy users 
> functionality|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Superusers.html]
>  in Hadoop works as expected.
>  
> Since FileSystem.get(URI, Configuration, user) is a public API and it does 
> not restrict its usage in this fashion, we should ideally make it work or add 
> docs to discourage its usage to implement proxy users.
>  






[jira] [Updated] (HADOOP-15758) Filesystem.get(URI, Configuration, user) API not working with proxy users

2018-09-17 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HADOOP-15758:
--
Attachment: HADOOP-15758-001.patch







[jira] [Commented] (HADOOP-15760) Include Apache Commons Collections4

2018-09-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617934#comment-16617934
 ] 

Hadoop QA commented on HADOOP-15760:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15760 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940024/HADOOP-15760.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 229935e259be 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fdf5a3f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15206/testReport/ |
| Max. process+thread count | 300 (vs. ulimit of 1) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15206/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Include Apache Commons Collections4
> ---
>
> Key: HADOOP-15760
> URL: https://issues.apache.org/jira/browse/HADOOP-15760
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0, 3.0.3
>Reporter: BELUGA BEHR
>Priority: Major
> Attachments: HADOOP-15760.1.patch
>
>
> Please allow for use of the Apache Commons Collections 4 library with the 
> end goal of migrating from Commons Collections 3.

[jira] [Commented] (HADOOP-15758) Filesystem.get(URI, Configuration, user) API not working with proxy users

2018-09-17 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617935#comment-16617935
 ] 

Hrishikesh Gadre commented on HADOOP-15758:
---

[~daryn] thanks for looking into this issue. I investigated this a little 
deeper and here is my understanding:
 * HADOOP-6769 added the FileSystem.get(URI, Configuration, user) API to 
support remote users. The idea was to have the FileSystem implementation create 
a remote user if the user argument is non-null. If the user parameter is null, 
the logic was to use the currently logged-in user.
 * At some later point HDFS-3568 added the ability to obtain a UGI using a 
provided ticket cache file path. As part of this patch, a new method 
"getBestUGI" was introduced in the UserGroupInformation class. This method 
handles three cases separately: (a) if a ticket cache path is specified, it 
uses those credentials to prepare the UGI and ignores the user argument; (b) if 
a ticket cache path is not specified and a user name is provided, it creates a 
remote user; and (c) if neither the ticket cache path nor the user name is 
specified, it uses the currently logged-in user.

As I see it, HDFS-3568 introduced an additional possibility: the application 
provides the user name as well as the ticket cache path. The question is 
whether this should be treated as a proxy user scenario. If this scenario is 
not valid, then we probably need to add documentation to discourage its use, or 
even throw an error.
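The three getBestUGI cases above can be sketched as a small decision function. 
This is an illustrative stand-in only, under my reading of the description: 
BestUgiSketch, UgiSource, and resolve() are hypothetical names, not Hadoop API.

```java
public class BestUgiSketch {
    // Which credential source getBestUGI would pick, per the three cases above.
    enum UgiSource { TICKET_CACHE, REMOTE_USER, LOGIN_USER }

    static UgiSource resolve(String ticketCachePath, String user) {
        if (ticketCachePath != null) {
            return UgiSource.TICKET_CACHE; // (a) ticket cache wins; user argument is ignored
        }
        if (user != null) {
            return UgiSource.REMOTE_USER;  // (b) create a remote user for the given name
        }
        return UgiSource.LOGIN_USER;       // (c) fall back to the currently logged-in user
    }

    public static void main(String[] args) {
        // The ambiguous case raised above: both arguments set -> user name is silently ignored
        System.out.println(resolve("/tmp/krb5cc_1000", "alice")); // TICKET_CACHE
        System.out.println(resolve(null, "alice"));               // REMOTE_USER
        System.out.println(resolve(null, null));                  // LOGIN_USER
    }
}
```

Case (a) swallowing the user argument is exactly why passing both a ticket 
cache and a user name does not behave like proxy user impersonation.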







[jira] [Commented] (HADOOP-15684) triggerActiveLogRoll stuck on dead name node, when ConnectTimeoutException happens.

2018-09-17 Thread Rong Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617870#comment-16617870
 ] 

Rong Tang commented on HADOOP-15684:


Thanks, [~elgoiri]

> triggerActiveLogRoll stuck on dead name node, when ConnectTimeoutException 
> happens. 
> 
>
> Key: HADOOP-15684
> URL: https://issues.apache.org/jira/browse/HADOOP-15684
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 3.0.0-alpha1
>Reporter: Rong Tang
>Assignee: Rong Tang
>Priority: Critical
> Attachments: 
> 0001-RollEditLog-try-next-NN-when-exception-happens.patch, 
> HADOOP-15684.000.patch, HADOOP-15684.001.patch, HADOOP-15684.002.patch, 
> HADOOP-15684.003.patch, HADOOP-15684.004.patch, 
> hadoop--rollingUpgrade-SourceMachine001.log
>
>
> When the name node calls triggerActiveLogRoll and the cachedActiveProxy is a 
> dead name node, it throws a ConnectTimeoutException. The expected behavior is 
> to try the next NN, but the current logic doesn't do so; instead, it keeps 
> trying the dead one, mistakenly taking it as active.
>  
> 2018-08-17 10:02:12,001 WARN [Edit log tailer] 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a 
> roll of the active NN
> org.apache.hadoop.net.ConnectTimeoutException: Call From 
> SourceMachine001/SourceIP001 to TargetMachine001.ap.gbl:8020 failed on socket 
> timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 2 
> millis timeout 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:298)
>  
> C:\Users\rotang>ping TargetMachine001
> Pinging TargetMachine001[TargetIP001] with 32 bytes of data:
>  Request timed out.
>  Request timed out.
>  Request timed out.
>  Request timed out.
>  The attachment is a log file showing how it repeatedly retries a dead name 
> node, along with a fix patch.
>  I replaced the actual machine names/IPs with SourceMachine001/SourceIP001 
> and TargetMachine001/TargetIP001.
>  
> How to Repro:
> With a healthy set of running NNs, take down the active NN (and don't let it 
> come back during the test); the standby NNs will then keep trying the dead 
> (old active) NN, because it is the cached one.
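The fix direction described above (advance to the next NN on a connect timeout 
instead of retrying the cached dead proxy) can be sketched as follows. This is 
a hypothetical illustration: NameNodeProxy and RollFailoverSketch are stand-in 
names, not the real EditLogTailer types.

```java
import java.util.List;
import java.util.concurrent.TimeoutException;

public class RollFailoverSketch {
    // Hypothetical stand-in for the NameNode RPC proxy used when rolling edit logs.
    interface NameNodeProxy {
        String rollEditLog() throws TimeoutException;
    }

    // Walk the configured NNs in order; on a connect timeout, move on to the
    // next candidate instead of retrying the same (possibly dead) cached proxy.
    static String triggerActiveLogRoll(List<NameNodeProxy> proxies) {
        for (NameNodeProxy proxy : proxies) {
            try {
                return proxy.rollEditLog();   // reachable NN: done
            } catch (TimeoutException e) {
                // dead/unreachable NN: fall through and try the next one
            }
        }
        throw new IllegalStateException("no reachable NameNode");
    }

    public static void main(String[] args) {
        NameNodeProxy dead = () -> { throw new TimeoutException("connect timeout"); };
        NameNodeProxy live = () -> "rolled";
        System.out.println(triggerActiveLogRoll(List.of(dead, live))); // prints "rolled"
    }
}
```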






[jira] [Commented] (HADOOP-15726) Create utility to limit frequency of log statements

2018-09-17 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617864#comment-16617864
 ] 

Chen Liang commented on HADOOP-15726:
-

Thanks for the clarification [~xkrogen]! I think I would prefer {{update()}}. 
+1 with this addressed.

> Create utility to limit frequency of log statements
> ---
>
> Key: HADOOP-15726
> URL: https://issues.apache.org/jira/browse/HADOOP-15726
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, util
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-15726.000.patch, HADOOP-15726.001.patch
>
>
> There is a common pattern of logging a behavior that is normally extraneous. 
> Under some circumstances, such a behavior becomes common, flooding the logs 
> and making it difficult to see what else is going on in the system. Under 
> such situations it is beneficial to limit how frequently the extraneous 
> behavior is logged, while capturing some summary information about the 
> suppressed log statements.
> This is currently implemented in {{FSNamesystemLock}} (in HDFS-10713). We 
> have additional use cases for this in HDFS-13791, so this is a good time to 
> create a common utility for different sites to share this logic.
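A minimal sketch of such a rate limiter, which both suppresses too-frequent 
statements and keeps a count of what was suppressed. This is illustrative only 
and assumes nothing about the actual utility's API: LogThrottleSketch and 
record() are made-up names.

```java
public class LogThrottleSketch {
    private final long minIntervalMs;
    private boolean hasLogged = false;
    private long lastLogMs = 0;
    private long suppressed = 0;

    LogThrottleSketch(long minIntervalMs) { this.minIntervalMs = minIntervalMs; }

    // Returns -1 if the caller should skip this log statement, otherwise the
    // number of statements suppressed since the last emitted one (the summary
    // information mentioned above).
    synchronized long record(long nowMs) {
        if (hasLogged && nowMs - lastLogMs < minIntervalMs) {
            suppressed++;
            return -1;                 // too soon: suppress and count
        }
        hasLogged = true;
        lastLogMs = nowMs;
        long skipped = suppressed;     // report how many were swallowed
        suppressed = 0;
        return skipped;
    }

    public static void main(String[] args) {
        LogThrottleSketch t = new LogThrottleSketch(1000);
        System.out.println(t.record(0));    // 0  -> log normally
        System.out.println(t.record(500));  // -1 -> suppressed
        System.out.println(t.record(600));  // -1 -> suppressed
        System.out.println(t.record(1500)); // 2  -> log with "2 suppressed" summary
    }
}
```

Callers pass a timestamp (e.g. from a monotonic clock) so the limiter itself 
stays deterministic and easy to test.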






[jira] [Updated] (HADOOP-15760) Include Apache Commons Collections4

2018-09-17 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15760:
-
Status: Patch Available  (was: Open)

> Include Apache Commons Collections4
> ---
>
> Key: HADOOP-15760
> URL: https://issues.apache.org/jira/browse/HADOOP-15760
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.3, 2.10.0
>Reporter: BELUGA BEHR
>Priority: Major
> Attachments: HADOOP-15760.1.patch
>
>
> Please allow for use of Apache Commons Collections 4 library with the end 
> goal of migrating from Commons Collections 3.





