[jira] [Commented] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645988#comment-16645988
 ] 

Dinesh Chitlangia commented on HADOOP-15785:


[~tasanuma0829] san - Thank you for the review and commit.

> [JDK10] Javadoc build fails on JDK 10 in hadoop-common
> --
>
> Key: HADOOP-15785
> URL: https://issues.apache.org/jira/browse/HADOOP-15785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15785.001.patch, HADOOP-15785.002.patch, 
> HADOOP-15785.003.patch
>
>
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-common-project/hadoop-common
> ...
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 02:22 min
> [INFO] Finished at: 2018-09-25T02:23:06Z
> [INFO] Final Memory: 119M/467M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-common: MavenReportException: Error while generating Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - You have not specified the version 
> of HTML to use.
> [ERROR] The default is currently HTML 4.01, but this will change to HTML5
> [ERROR] in a future release. To suppress this warning, please specify the
> [ERROR] version of HTML used in your documentation comments and to be
> [ERROR] generated by this doclet, using the -html4 or -html5 options.
> [ERROR] 
> /hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:1578:
>  error: malformed HTML
> [ERROR] * to servers are uniquely identified by  ticket>
> ...
> {noformat}
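
For context, a minimal sketch of the kind of fix these patches make: JDK 10's 
javadoc treats bare angle brackets in doc comments as malformed HTML, so literal 
tuples must be HTML-escaped. The exact text at Client.java:1578 is truncated in 
the log above, so the names below are illustrative, not the committed change.

{code:java}
// Before: JDK 10 javadoc rejects the bare brackets as "malformed HTML".
/** Connections to servers are uniquely identified by <address, protocol, ticket>. */

// After: HTML-escaping the brackets satisfies the JDK 10 doclint.
/** Connections to servers are uniquely identified by &lt;address, protocol, ticket&gt;. */
{code}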






[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
  Attachment: HADOOP-15124.001.patch
Target Version/s: 3.0.3, 3.1.0, 2.6.6, 2.10.0, 3.2.0, 2.9.2, 2.8.5, 2.7.8, 
3.0.4, 3.1.2  (was: 2.6.6, 3.1.0, 2.10.0, 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.8, 
3.0.4, 3.1.2)
  Status: Patch Available  (was: Open)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.1.0, 3.0.0, 2.7.5, 2.8.3, 2.9.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 
> 2 workers, GCS connector) I saw that FileSystem.Statistics code paths 
> accounted for 5.58% of wall time and 26.5% of CPU time of the total execution time.
> After switching the FileSystem.Statistics implementation to LongAdder, consumed 
> wall time decreased to 0.006% and CPU time to 0.104% of total execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average the results, but regardless of the performance gains, switching to 
> LongAdder simplifies the code and reduces its complexity.
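
A rough sketch of the change described above (not the actual patch): replacing a 
synchronized/volatile long counter with java.util.concurrent.atomic.LongAdder 
moves the contention off the hot write path.

{code:java}
import java.util.concurrent.atomic.LongAdder;

// Illustrative counter in the style of FileSystem.Statistics; the real class
// tracks several counters and per-thread data, this only shows the idea.
class BytesReadCounter {
  private final LongAdder bytesRead = new LongAdder();

  void incrementBytesRead(long newBytes) {
    bytesRead.add(newBytes);  // striped add: concurrent writers rarely contend
  }

  long getBytesRead() {
    return bytesRead.sum();   // aggregation cost is paid only by readers
  }
}
{code}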






[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Attachment: (was: HADOOP-15124.001.patch)







[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Target Version/s: 3.0.3, 3.1.0, 2.6.6, 2.10.0, 3.2.0, 2.9.2, 2.8.5, 2.7.8, 
3.0.4, 3.1.2  (was: 2.6.6, 3.1.0, 2.10.0, 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.8, 
3.0.4, 3.1.2)
  Status: Open  (was: Patch Available)







[jira] [Commented] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645984#comment-16645984
 ] 

Hudson commented on HADOOP-15785:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15176 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15176/])
HADOOP-15785. [JDK10] Javadoc build fails on JDK 10 in hadoop-common. 
(tasanuma: rev 7b57f2f71fbaa5af4897309597cca70a95b04edd)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/Lz4Codec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/PseudoDelegationTokenAuthenticator.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/Bzip2Decompressor.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Decryptor.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAAdmin.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/DNS.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/util/CLICommand.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/HadoopKerberosName.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/SshFenceByTcpPort.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/package-info.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/Serializer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/JsonSerialization.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/MBeans.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderBenchmark.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigurationWithLogging.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/AbstractDNSToSwitchMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/launcher/ServiceLauncher.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/util/CLICommandTypes.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/TFile.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryProxy.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/ScopedAclEntries.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/CBZip2OutputStream.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RemoteException.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
* (edi

[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Attachment: (was: HADOOP-15124.001.patch)







[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
  Attachment: HADOOP-15124.001.patch
Target Version/s: 3.0.3, 3.1.0, 2.6.6, 2.10.0, 3.2.0, 2.9.2, 2.8.5, 2.7.8, 
3.0.4, 3.1.2  (was: 2.6.6, 3.1.0, 2.10.0, 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.8, 
3.0.4, 3.1.2)
  Status: Patch Available  (was: Open)







[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Target Version/s: 3.0.3, 3.1.0, 2.6.6, 2.10.0, 3.2.0, 2.9.2, 2.8.5, 2.7.8, 
3.0.4, 3.1.2  (was: 2.6.6, 3.1.0, 2.10.0, 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.8, 
3.0.4, 3.1.2)
  Status: Open  (was: Patch Available)







[jira] [Commented] (HADOOP-15841) ABFS: change createRemoteFileSystemDuringInitialization default to true

2018-10-10 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645977#comment-16645977
 ] 

Thomas Marquardt commented on HADOOP-15841:
---

Fine with me. The downside is that, due to a typo, you might create a new 
filesystem instead of using the existing one. This is why we didn't change the 
default to true.

> ABFS: change createRemoteFileSystemDuringInitialization default to true
> ---
>
> Key: HADOOP-15841
> URL: https://issues.apache.org/jira/browse/HADOOP-15841
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> I haven't seen a way to create a working container (at least for the dfs 
> endpoint) except for setting 
> fs.azure.createRemoteFileSystemDuringInitialization=true. I personally don't 
> see that much of a downside to having it default to true, and it's a mild 
> inconvenience to remember to set it to true for some action to create a 
> container. I vaguely recall [~tmarquardt] considering changing this default 
> too.
> I propose we do it.
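
For reference, the workaround described above is a one-line setting in 
core-site.xml; the proposal is simply to flip its default:

{code:xml}
<property>
  <name>fs.azure.createRemoteFileSystemDuringInitialization</name>
  <value>true</value>
</property>
{code}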






[jira] [Commented] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-10 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645976#comment-16645976
 ] 

Takanobu Asanuma commented on HADOOP-15785:
---

Committed to trunk. Thank you very much for the contribution, 
[~dineshchitlangia]!







[jira] [Updated] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-10 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15785:
--
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)







[jira] [Commented] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-10 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645968#comment-16645968
 ] 

Takanobu Asanuma commented on HADOOP-15785:
---

+1. Will commit it later.







[jira] [Updated] (HADOOP-15841) ABFS: change createRemoteFileSystemDuringInitialization default to true

2018-10-10 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15841:
---
Summary: ABFS: change createRemoteFileSystemDuringInitialization default to 
true  (was: ABFS: )







[jira] [Created] (HADOOP-15841) ABFS:

2018-10-10 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-15841:
--

 Summary: ABFS: 
 Key: HADOOP-15841
 URL: https://issues.apache.org/jira/browse/HADOOP-15841
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sean Mackrory
Assignee: Sean Mackrory


I haven't seen a way to create a working container (at least for the dfs 
endpoint) except for setting 
fs.azure.createRemoteFileSystemDuringInitialization=true. I personally don't 
see that much of a downside to having it default to true, and it's a mild 
inconvenience to remember to set it to true for some action to create a 
container. I vaguely recall [~tmarquardt] considering changing this default too.

I propose we do it.






[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-10 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645911#comment-16645911
 ] 

Sean Mackrory commented on HADOOP-15823:


Hmm... still in the situation where I can successfully get the token via MSI 
but the token isn't working, and I'm *pretty* sure my setup is identical to 
before, so I'm still concerned something's not quite right.

requestId on my last test was 06e5fbb3-101f-0073-6b12-61ff4500, in case 
that's helpful.

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch, HADOOP-15823-002.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?
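
For context, a sketch of the MSI configuration being discussed; the key and 
provider names reflect my reading of the hadoop-azure OAuth configuration and 
should be treated as assumptions. The patch aims to make the last two entries 
optional.

{code:xml}
<property>
  <name>fs.azure.account.auth.type</name>
  <value>OAuth</value>
</property>
<property>
  <name>fs.azure.account.oauth.provider.type</name>
  <value>org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider</value>
</property>
<!-- Currently required for MSI; the proposal is to stop requiring them. -->
<property>
  <name>fs.azure.account.oauth2.msi.tenant</name>
  <value>TENANT_ID</value>
</property>
<property>
  <name>fs.azure.account.oauth2.client.id</name>
  <value>CLIENT_ID</value>
</property>
{code}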






[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-10 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645904#comment-16645904
 ] 

Thomas Marquardt commented on HADOOP-15823:
---

+1







[jira] [Commented] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645870#comment-16645870
 ] 

Hadoop QA commented on HADOOP-15124:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
45s{color} | {color:green} root generated 0 new + 1325 unchanged - 2 fixed = 
1325 total (was 1327) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m  0s{color} | {color:orange} root: The patch generated 10 new + 120 unchanged 
- 0 fixed = 130 total (was 120) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}229m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15124 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943315/HADOOP-15124.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 667cdda078cd 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |

[jira] [Commented] (HADOOP-15708) Reading values from Configuration before adding deprecations make it impossible to read value with deprecated key

2018-10-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645849#comment-16645849
 ] 

Hudson commented on HADOOP-15708:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15175 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15175/])
HADOOP-15708. Reading values from Configuration before adding (rkanter: rev 
f261c319375c5a8c298338752ee77214c22f4e29)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> Reading values from Configuration before adding deprecations make it 
> impossible to read value with deprecated key
> -
>
> Key: HADOOP-15708
> URL: https://issues.apache.org/jira/browse/HADOOP-15708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Zoltan Siegl
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15708-testcase.patch, HADOOP-15708.001.patch, 
> HADOOP-15708.002.patch, HADOOP-15708.003.patch, HADOOP-15708.004.patch
>
>
> Hadoop Common contains a widely used Configuration class.
>  This class can handle deprecations of properties, e.g. if property 'A' gets 
> deprecated with an alternative property key 'B', users can access property 
> values with keys 'A' and 'B'.
>  Unfortunately, this does not work in one case.
>  When a config file is specified (for instance, XML) and a property is read 
> with the config.get() method, the config is loaded from the file at this 
> time. 
>  If the deprecation mapping is not yet specified by the time any config value 
> is retrieved and the XML config refers to a deprecated key, then even after 
> the deprecation mapping is specified, the config value can be retrieved with 
> neither the deprecated nor the new key.
>  The attached patch contains a testcase that reproduces this wrong behavior.
> Here are the steps outlined what the testcase does:
>  1. Creates an XML config file with a deprecated property
>  2. Adds the config to the Configuration object
>  3. Retrieves the config with its deprecated key (it does not really matter 
> which property the user gets, could be any)
>  4. Specifies the deprecation rules including the one defined in the config
>  5. Prints and asserts the property retrieved from the config with both the 
> deprecated and the new property keys.
> For reference, here is the log of one execution that actually shows what the 
> issue is:
> {noformat}
> Loaded items: 1
> Looked up property value with name hadoop.zk.address: null
> Looked up property value with name yarn.resourcemanager.zk-address: 
> dummyZkAddress
> Contents of config file: [, , 
> yarn.resourcemanager.zk-addressdummyZkAddress,
>  ]
> Looked up property value with name hadoop.zk.address: null
> 2018-08-31 10:10:06,484 INFO  Configuration.deprecation 
> (Configuration.java:logDeprecation(1397)) - yarn.resourcemanager.zk-address 
> is deprecated. Instead, use hadoop.zk.address
> Looked up property value with name hadoop.zk.address: null
> Looked up property value with name hadoop.zk.address: null
> java.lang.AssertionError: 
> Expected :dummyZkAddress
> Actual   :null
> {noformat}
> *As is visible from the output and the code, the issue is really that whether 
> the config is retrieved with the deprecated or the new key, Configuration 
> wants to serve the value under the new key.*
>  *If the mapping is not specified before any retrieval has happened, the value 
> is only stored under the deprecated key but not the new key.*
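
A self-contained sketch of the ordering pitfall, using the property names from 
the log above (addDeprecation and get are Configuration's public API; the 
resource file name is illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class DeprecationOrdering {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // XML resource containing yarn.resourcemanager.zk-address (illustrative name).
    conf.addResource("config-with-deprecated-key.xml");

    // Any get() here forces the XML to load before the mapping exists.
    conf.get("hadoop.zk.address");                      // returns null

    Configuration.addDeprecation(
        "yarn.resourcemanager.zk-address", "hadoop.zk.address");

    // The bug: the value is now reachable under neither key. Registering the
    // deprecation *before* the first get() avoids the problem.
    conf.get("hadoop.zk.address");                      // still null
  }
}
{code}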






[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645845#comment-16645845
 ] 

Hadoop QA commented on HADOOP-15823:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15823 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943332/HADOOP-15823-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d423fa4d16d2 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2bd000c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15348/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15348/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HADOOP-15708) Reading values from Configuration before adding deprecations make it impossible to read value with deprecated key

2018-10-10 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-15708:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~zsiegl] and everyone else for reviews.  Committed to trunk!







[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60

2018-10-10 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645768#comment-16645768
 ] 

Robert Kanter commented on HADOOP-15832:


Good point [~ste...@apache.org], I hadn't thought about that.  It looks like we 
already have a notification about crypto export stuff in the README.txt 
([https://github.com/apache/hadoop/blob/trunk/README.txt]) and we need to 
simply append some details to the bottom, right?
{noformat}
...
The following provides more details on the included cryptographic
software:
  Hadoop Core uses the SSL libraries from the Jetty project written 
by mortbay.org.
  Hadoop Yarn Server Web Proxy uses the BouncyCastle Java
cryptography APIs written by the Legion of the Bouncy Castle Inc.
{noformat}
[~ste...@apache.org], does that sound good?  Anything else that's needed?  I 
can make an addendum patch.
 

> Upgrade BouncyCastle to 1.60
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15832.001.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> <dependency>
>   <groupId>org.bouncycastle</groupId>
>   <artifactId>bcprov-jdk16</artifactId>
>   <version>1.46</version>
>   <scope>test</scope>
> </dependency>
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from 2011! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.
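
Concretely, the swap recommended above would look like this in the POM (same 
test scope as today; a sketch, not the committed patch):

{code:xml}
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-jdk15on</artifactId>
  <version>1.60</version>
  <scope>test</scope>
</dependency>
{code}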






[jira] [Comment Edited] (HADOOP-15837) DynamoDB table Update can fail S3A FS init

2018-10-10 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645750#comment-16645750
 ] 

Sean Mackrory edited comment on HADOOP-15837 at 10/11/18 12:14 AM:
---

Thanks for the fix [~ste...@apache.org] - the problem and solution make sense.

Being a little nit-picky, but can we rename maybeUnwrap to something a little 
more descriptive like getNestedSDKException, and just call it once instead of 
twice? There's a lot of exception translation going on and I think that'd make 
a big difference to the understandability of this code.

Other than that I'm a +1. I did not get a chance to run tests myself today, but 
feel free to commit before I do since you have.


was (Author: mackrorysd):
Thanks for the fix [~ste...@apache.org] - the problem and solution make sense.

Being a little nit-picky, but can we rename maybeUnwrap to something a little 
more descriptive like getNestedSDKException, and just call it once instead of 
twice?

Other than that I'm a +1. I did not get a chance to run tests myself today, but 
feel free to commit before I do since you have.

> DynamoDB table Update can fail S3A FS init
> --
>
> Key: HADOOP-15837
> URL: https://issues.apache.org/jira/browse/HADOOP-15837
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: s3guard test with small capacity (10) but autoscale 
> enabled & multiple consecutive parallel test runs executed...this seems to 
> have been enough load to trigger the state change
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15837-001.patch, HADOOP-15837-002.patch
>
>
> When DDB autoscales a table, it goes into an UPDATING state. The 
> waitForTableActive operation in the AWS SDK doesn't seem to wait long enough 
> for this to recover. We need to catch & retry
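
A hedged sketch of the catch-and-retry idea from the description; the waiter 
interface and retry limits are illustrative stand-ins, not the committed patch:

{code:java}
public final class TableWaitRetry {

  /** Stand-in for the AWS SDK's wait-for-table-active call. */
  interface ActiveWait {
    void waitForActive() throws Exception;
  }

  /**
   * Retry the active-table wait so a transient UPDATING state (e.g. during
   * autoscaling) does not fail filesystem initialization.
   */
  static void awaitActive(ActiveWait wait, int maxAttempts, long sleepMillis)
      throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        wait.waitForActive();
        return;                     // table reached ACTIVE
      } catch (Exception e) {
        if (attempt >= maxAttempts) {
          throw e;                  // out of retries: surface the failure
        }
        Thread.sleep(sleepMillis);  // back off, then wait again
      }
    }
  }
}
{code}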






[jira] [Commented] (HADOOP-15837) DynamoDB table Update can fail S3A FS init

2018-10-10 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645750#comment-16645750
 ] 

Sean Mackrory commented on HADOOP-15837:


Thanks for the fix [~ste...@apache.org] - the problem and solution make sense.

Being a little nit-picky, but can we rename maybeUnwrap to something a little 
more descriptive like getNestedSDKException, and just call it once instead of 
twice?

Other than that I'm a +1. I did not get a chance to run tests myself today, but 
feel free to commit before I do since you have.







[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-10 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645748#comment-16645748
 ] 

Da Zhou commented on HADOOP-15823:
--

[~mackrorysd], the 001 patch was pointing to a wrong resource URL; it was based 
on the Azure documentation (which might not be 100% correct).

Since it works for you when you provided the tenant ID and client ID, I am 
submitting a 002 patch, which only removes the null check for tenantId and 
clientId (the rest of the logic remains the same as ADL). Could you try this? 
Sorry for the inconvenience...







[jira] [Updated] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-10 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15823:
-
Attachment: HADOOP-15823-002.patch







[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-10 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645743#comment-16645743
 ] 

Sean Mackrory commented on HADOOP-15823:


Thanks for the quick fix, [~DanielZhou]. Given the need to set up Azure 
infrastructure to test this without ridiculous levels of mocking, I'm fine with 
ignoring Yetus' warning about the lack of tests.

I tried this out in my environment with a user-assigned managed identity that 
worked before. The setup is identical, I believe, to what I tried before, where 
I was able to interact with the account successfully once I had set the client 
ID and tenant ID. This time I am able to get a valid-looking bearer token, but 
I get 403s when using it on subsequent requests. Will need to investigate 
more...

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645735#comment-16645735
 ] 

Hadoop QA commented on HADOOP-15821:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 48 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies hadoop-common-project 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-client-modules/hadoop-client-minicluster . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
4s{color} | {color:green} root generated 0 new + 1322 unchanged - 5 fixed = 
1322 total (was 1327) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
37s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
13s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 14 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
17s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies hadoop-common-project 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry 
hadoop-client-modules/hadoop-client-minicluster . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}170m  2s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:

[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645723#comment-16645723
 ] 

Hadoop QA commented on HADOOP-15821:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 48 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies hadoop-common-project 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-client-modules/hadoop-client-minicluster . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 37m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
2s{color} | {color:green} root generated 0 new + 1322 unchanged - 5 fixed = 
1322 total (was 1327) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
35s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 14 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
13s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies hadoop-common-project 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry 
hadoop-client-modules/hadoop-client-minicluster . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}143m 30s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:

[jira] [Comment Edited] (HADOOP-15717) TGT renewal thread does not log IOException

2018-10-10 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645635#comment-16645635
 ] 

Xiao Chen edited comment on HADOOP-15717 at 10/10/18 10:34 PM:
---

Thanks [~snemeth] for the new rev, and thanks [~rkanter] for reading my mind. :)

The logging _can_ be done with slf4j parameterized logging. See 
https://www.slf4j.org/apidocs/org/slf4j/Logger.html

For example, the following will log {{ie}} along with its stack trace.
{noformat}
LOG.error("TGT is destroyed. Aborting renew thread for {}.", 
getUserName(), ie);
{noformat}


was (Author: xiaochen):
Thanks [~snemeth] for the new rev.

The logging _can_ be done with slf4j parameterized logging. See 
https://www.slf4j.org/apidocs/org/slf4j/Logger.html

For example, the following will log ie into the message with its stacktrace.
{noformat}
LOG.error("TGT is destroyed. Aborting renew thread for {}.", 
getUserName(), ie);
{noformat}

> TGT renewal thread does not log IOException
> ---
>
> Key: HADOOP-15717
> URL: https://issues.apache.org/jira/browse/HADOOP-15717
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: HADOOP-15717.001.patch, HADOOP-15717.002.patch
>
>
> I came across a case where tgt.getEndTime() returned null and resulted 
> in an NPE; this was observed during a test suite execution on a 
> cluster. The reason for logging the {{IOException}} is that it helps to 
> troubleshoot what caused the exception, as it can come from two different 
> calls within the try-catch.
> I can see that [~gabor.bota] handled this with HADOOP-15593, but apart from 
> logging the fact that the ticket's {{endDate}} was null, we have not logged 
> the exception at all.
> With the current code, the exception is swallowed and the thread terminates 
> in case the ticket's {{endDate}} is null. 
> As this can happen with OpenJDK, for example, the exception (stack trace and 
> message) needs to be printed to the log.
> The code should be updated here: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L918



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15717) TGT renewal thread does not log IOException

2018-10-10 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645635#comment-16645635
 ] 

Xiao Chen commented on HADOOP-15717:


Thanks [~snemeth] for the new rev.

The logging _can_ be done with slf4j parameterized logging. See 
https://www.slf4j.org/apidocs/org/slf4j/Logger.html

For example, the following will log {{ie}} along with its stack trace.
{noformat}
LOG.error("TGT is destroyed. Aborting renew thread for {}.", 
getUserName(), ie);
{noformat}
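
This works because slf4j (since 1.6.0) treats a trailing argument that is a 
{{Throwable}} with no matching placeholder as the exception to attach, so the 
full stack trace is kept:

{noformat}
// One {} plus a trailing Throwable: ie is logged with its full stack trace.
LOG.error("TGT is destroyed. Aborting renew thread for {}.", getUserName(), ie);

// A placeholder for the exception instead: only ie.toString() lands in the
// message and the stack trace is lost.
LOG.error("TGT is destroyed. Aborting renew thread for {}. {}", getUserName(), ie);
{noformat}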

> TGT renewal thread does not log IOException
> ---
>
> Key: HADOOP-15717
> URL: https://issues.apache.org/jira/browse/HADOOP-15717
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: HADOOP-15717.001.patch, HADOOP-15717.002.patch
>
>
> I came across a case where tgt.getEndTime() returned null and resulted 
> in an NPE; this was observed during a test suite execution on a 
> cluster. The reason for logging the {{IOException}} is that it helps to 
> troubleshoot what caused the exception, as it can come from two different 
> calls within the try-catch.
> I can see that [~gabor.bota] handled this with HADOOP-15593, but apart from 
> logging the fact that the ticket's {{endDate}} was null, we have not logged 
> the exception at all.
> With the current code, the exception is swallowed and the thread terminates 
> in case the ticket's {{endDate}} is null. 
> As this can happen with OpenJDK, for example, the exception (stack trace and 
> message) needs to be printed to the log.
> The code should be updated here: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L918



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Igor Dvorzhak (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645622#comment-16645622
 ] 

Igor Dvorzhak commented on HADOOP-15124:


[~xkrogen] Sure, I would like to push it forward.

I just updated the patch and PR after rebasing on trunk, and will happily 
address review comments to get this merged.

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 
> 2 workers, GCS connector) I saw that the FileSystem.Statistics code paths took 
> 5.58% of wall time and 26.5% of CPU time of the total execution time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and CPU time to 0.104% of the total 
> execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average the results, but regardless of the performance gains, switching to 
> LongAdder simplifies the code and reduces its complexity.
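
For reference, a minimal sketch of the {{LongAdder}} counter style (not the 
attached patch; the class and field names are hypothetical):

{noformat}
import java.util.concurrent.atomic.LongAdder;

// LongAdder stripes increments across internal cells, so concurrent writers
// rarely contend; the aggregation cost is paid only at read time, in sum().
class ReadStatistics {
  private final LongAdder bytesRead = new LongAdder();

  void recordBytesRead(long n) {
    bytesRead.add(n);          // hot path, called from many threads
  }

  long getBytesRead() {
    return bytesRead.sum();    // cold path, folds the cells when reported
  }
}
{noformat}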



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
  Attachment: HADOOP-15124.001.patch
Target Version/s: 3.0.3, 3.1.0, 2.6.6, 2.10.0, 3.2.0, 2.9.2, 2.8.5, 2.7.8, 
3.0.4, 3.1.2  (was: 2.6.6, 3.1.0, 2.10.0, 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.8, 
3.0.4, 3.1.2)
  Status: Patch Available  (was: Open)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.1.0, 3.0.0, 2.7.5, 2.8.3, 2.9.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 
> 2 workers, GCS connector) I saw that the FileSystem.Statistics code paths took 
> 5.58% of wall time and 26.5% of CPU time of the total execution time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and CPU time to 0.104% of the total 
> execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average the results, but regardless of the performance gains, switching to 
> LongAdder simplifies the code and reduces its complexity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Attachment: (was: HADOOP-15124.001.patch)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 
> 2 workers, GCS connector) I saw that the FileSystem.Statistics code paths took 
> 5.58% of wall time and 26.5% of CPU time of the total execution time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and CPU time to 0.104% of the total 
> execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average the results, but regardless of the performance gains, switching to 
> LongAdder simplifies the code and reduces its complexity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Target Version/s: 3.0.3, 3.1.0, 2.6.6, 2.10.0, 3.2.0, 2.9.2, 2.8.5, 2.7.8, 
3.0.4, 3.1.2  (was: 2.6.6, 3.1.0, 2.10.0, 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.8, 
3.0.4, 3.1.2)
  Status: Open  (was: Patch Available)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.1.0, 3.0.0, 2.7.5, 2.8.3, 2.9.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 
> 2 workers, GCS connector) I saw that the FileSystem.Statistics code paths took 
> 5.58% of wall time and 26.5% of CPU time of the total execution time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and CPU time to 0.104% of the total 
> execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average the results, but regardless of the performance gains, switching to 
> LongAdder simplifies the code and reduces its complexity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15813) Enable more reliable SSL connection reuse

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645573#comment-16645573
 ] 

Hadoop QA commented on HADOOP-15813:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15813 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943298/HADOOP-15813.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d3d64b599eba 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bf3d591 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15345/testReport/ |
| Max. process+thread count | 1517 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15345/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Enable more reli

[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645521#comment-16645521
 ] 

Hadoop QA commented on HADOOP-15823:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
13s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15823 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943297/HADOOP-15823-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux da2ffa50b220 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bf3d591 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15346/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15346/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> ABFS: Stop requiring client ID and

[jira] [Comment Edited] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-10 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645468#comment-16645468
 ] 

Da Zhou edited comment on HADOOP-15823 at 10/10/18 7:50 PM:


Sorry for the late reply.
 As [~tmarquardt] mentioned, the current MSI token implementation in ABFS does 
not exactly match the 
[documentation|https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-http].
 I've updated the implementation and removed the unnecessary query parameters, 
following the official documentation. [~mackrorysd], could you try this patch 
at your end?


was (Author: danielzhou):
Sorry for this late reply.
As Thomas mentioned, the current MSI  token implementation in ABFS  is not 
exactly the same as the 
[documentation|https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-http].
I've updated the implementation and removed unnecessary query parameters by 
following the official documentation. [~mackrorysd] could you try this patch at 
your end?

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-10 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15823:
-
Status: Patch Available  (was: Open)

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-10 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645468#comment-16645468
 ] 

Da Zhou commented on HADOOP-15823:
--

Sorry for the late reply.
As Thomas mentioned, the current MSI token implementation in ABFS does not 
exactly match the 
[documentation|https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-http].
I've updated the implementation and removed the unnecessary query parameters, 
following the official documentation. [~mackrorysd], could you try this patch 
at your end?

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15813) Enable more reliable SSL connection reuse

2018-10-10 Thread Daryn Sharp (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-15813:
-
Attachment: HADOOP-15813.patch

> Enable more reliable SSL connection reuse
> -
>
> Key: HADOOP-15813
> URL: https://issues.apache.org/jira/browse/HADOOP-15813
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-15813.patch, HADOOP-15813.patch
>
>
> The Java keep-alive cache relies on instance equivalence of the SSL socket 
> factory.  In many Java versions, SSLContext#getSocketFactory always returns a 
> new instance, which completely breaks the cache.  Clients then flood a service 
> with lingering per-request connections, which can lead to port exhaustion.  
> The Hadoop SSLFactory should cache the socket factory associated with the 
> context.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15813) Enable more reliable SSL connection reuse

2018-10-10 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645464#comment-16645464
 ] 

Daryn Sharp commented on HADOOP-15813:
--

{quote}for the sake of consistency shouldn't be the SSL Server Socket Factory 
cached as well?
{quote}
Good question.  I didn't dig into the bowels of the JDK to see what impact, if 
any, that would have.  I've been trying to surgically address proven issues.  
Touching the server side with no proof that anything is wrong seems to only add 
risk.

> Enable more reliable SSL connection reuse
> -
>
> Key: HADOOP-15813
> URL: https://issues.apache.org/jira/browse/HADOOP-15813
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-15813.patch
>
>
> The Java keep-alive cache relies on instance equivalence of the SSL socket 
> factory.  In many Java versions, SSLContext#getSocketFactory always returns a 
> new instance, which completely breaks the cache.  Clients then flood a service 
> with lingering per-request connections, which can lead to port exhaustion.  
> The Hadoop SSLFactory should cache the socket factory associated with the 
> context.
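
A sketch of the caching idea the description calls for, assuming the usual 
double-checked pattern; the class shape below is illustrative, not the attached 
patch:

{noformat}
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;

// Return the same SSLSocketFactory instance on every call, so the JDK
// keep-alive cache (which matches connections by factory identity) can
// actually reuse sockets instead of opening a new one per request.
class CachingSslFactory {
  private final SSLContext context;
  private volatile SSLSocketFactory socketFactory;

  CachingSslFactory(SSLContext context) {
    this.context = context;
  }

  SSLSocketFactory createSSLSocketFactory() {
    if (socketFactory == null) {
      synchronized (this) {
        if (socketFactory == null) {
          socketFactory = context.getSocketFactory();  // cache one instance
        }
      }
    }
    return socketFactory;
  }
}
{noformat}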



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645456#comment-16645456
 ] 

Hadoop QA commented on HADOOP-15124:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-15124 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15124 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928853/HADOOP-15124.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15344/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 
> 2 workers, GCS connector) I saw that the FileSystem.Statistics code paths took 
> 5.58% of wall time and 26.5% of CPU time of the total execution time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and CPU time to 0.104% of the total 
> execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average the results, but regardless of the performance gains, switching to 
> LongAdder simplifies the code and reduces its complexity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-10 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15823:
-
Attachment: HADOOP-15823-001.patch

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-10 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645436#comment-16645436
 ] 

Erik Krogen commented on HADOOP-15124:
--

Hi [~medb], [~ste...@apache.org], are the two of you still planning to push 
this work forward? We may be interested in helping as well. Please let me know, 
thanks.

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 
> 2 workers, GCS connector) I saw that the FileSystem.Statistics code paths took 
> 5.58% of wall time and 26.5% of CPU time of the total execution time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and CPU time to 0.104% of the total 
> execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average the results, but regardless of the performance gains, switching to 
> LongAdder simplifies the code and reduces its complexity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15125) Complete integration of new StorageStatistics

2018-10-10 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645432#comment-16645432
 ] 

Erik Krogen commented on HADOOP-15125:
--

Thanks for the update, [~liuml07]... If HADOOP-15124 is a prerequisite for the 
other patches, I will start there.

> Complete integration of new StorageStatistics
> -
>
> Key: HADOOP-15125
> URL: https://issues.apache.org/jira/browse/HADOOP-15125
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> HADOOP-13065 added the new StorageStatistics API, but there are a couple of 
> subtasks remaining, and we are gaining more experience using it.
> This JIRA covers the task of pulling those patches in and evolving what we 
> have, targeting, realistically, v3.2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-10 Thread Jonathan Eagles (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645426#comment-16645426
 ] 

Jonathan Eagles commented on HADOOP-15815:
--

Seems like 9.3.24 addresses the security concerns, and I don't see any 
compelling reason to move beyond it.

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15125) Complete integration of new StorageStatistics

2018-10-10 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645391#comment-16645391
 ] 

Mingliang Liu commented on HADOOP-15125:


[~xkrogen], I was following [HADOOP-15124] and hopefully there will be a 
refined implementation of FS.Statistics. After that, we can update 
[HADOOP-13435] and [HADOOP-13032]. I don't have an ETA for when [HADOOP-15124] 
will be ready to commit. If you are interested, feel free to pick up any of the 
work; I can help review. Thanks,

> Complete integration of new StorageStatistics
> -
>
> Key: HADOOP-15125
> URL: https://issues.apache.org/jira/browse/HADOOP-15125
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> HADOOP-13065 added the new StorageStatistics API, but there are a couple of 
> subtasks remaining, and we are gaining more experience using it.
> This JIRA covers the task of pulling those patches in and evolving what we 
> have, targeting, realistically, v3.2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-15839) Review + update cloud store sensitive keys in hadoop.security.sensitive-config-keys

2018-10-10 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt reopened HADOOP-15839:
---

Looks like we missed things like "fs.azure.account.oauth2.client.secret".  See 
my earlier comment in this JIRA.

> Review + update cloud store sensitive keys in 
> hadoop.security.sensitive-config-keys
> ---
>
> Key: HADOOP-15839
> URL: https://issues.apache.org/jira/browse/HADOOP-15839
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HADOOP-15839-001.patch
>
>
> Make sure that {{hadoop.security.sensitive-config-keys}} is up to date with 
> all cloud store options, including
> h3. s3a:
> * s3a per-bucket secrets
> * s3a session tokens
> h3. abfs
> * {{fs.azure.account.oauth2.client.secret}}
> h3. adls
> * fs.adl.oauth2.credential
> * fs.adl.oauth2.refresh.token
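
A toy illustration of how such a pattern list is applied when configuration is 
logged or displayed (not Hadoop's actual redaction code; the class name and 
patterns below are illustrative):

{noformat}
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

// Any key matching a sensitive pattern has its value masked before the
// configuration is logged or shown in a UI.
class SensitiveKeyRedactor {
  private final List<Pattern> sensitive = Arrays.asList(
      Pattern.compile("fs\\.azure\\.account\\.oauth2\\.client\\.secret"),
      Pattern.compile("fs\\.adl\\.oauth2\\.(credential|refresh\\.token)"));

  String redact(String key, String value) {
    for (Pattern p : sensitive) {
      if (p.matcher(key).find()) {
        return "<redacted>";
      }
    }
    return value;
  }
}
{noformat}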



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15125) Complete integration of new StorageStatistics

2018-10-10 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645383#comment-16645383
 ] 

Erik Krogen commented on HADOOP-15125:
--

Ping [~ste...@apache.org], [~liuml07] - is anyone planning on continuing this 
work?

> Complete integration of new StorageStatistics
> -
>
> Key: HADOOP-15125
> URL: https://issues.apache.org/jira/browse/HADOOP-15125
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> HADOOP-13065 added the new StorageStatistics API, but there are a couple of 
> subtasks remaining, and we are gaining more experience using it.
> This JIRA covers the task of pulling those patches in and evolving what we 
> have, targeting, realistically, v3.2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15676) Cleanup TestSSLHttpServer

2018-10-10 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645310#comment-16645310
 ] 

Xiao Chen commented on HADOOP-15676:


Thanks for revving, [~snemeth].

I understand it's existing code and you're just fixing a typo in the message, 
and thanks for doing that. But since we're changing it, let's try to improve it 
for good.

IMO that try-catch-fail is bad, because only the exception message will be 
logged without the actual stack trace. To me, that's less information because 
otherwise one can just get to the failure point by looking at the stack trace - 
now this has to be done by text search and log correlation. The message itself 
doesn't seem to have substantial information either. If you'd like more 
information, we can log the ciphers before calling into the function.

The pre-commit results are gone, but I think checkstyle is complaining about 
{{oneEnabledCiphers}} and {{excludedCiphers}} not meeting the constant naming 
regex (they need to be upper case).
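
I.e., something along these lines (a sketch; the declared types and values in 
the test are elided here):

{noformat}
// Checkstyle's constant-name rule expects static final fields in
// UPPER_SNAKE_CASE:
private static final String EXCLUDED_CIPHERS = "...";     // was excludedCiphers
private static final String ONE_ENABLED_CIPHERS = "...";  // was oneEnabledCiphers
{noformat}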

> Cleanup TestSSLHttpServer
> -
>
> Key: HADOOP-15676
> URL: https://issues.apache.org/jira/browse/HADOOP-15676
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: HADOOP-15676.001.patch, HADOOP-15676.002.patch, 
> HADOOP-15676.003.patch
>
>
> This issue will fix: 
> * Several typos in this class
> * Code that is not very readable in some places



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15840) Slow RPC logger too spammy

2018-10-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645296#comment-16645296
 ] 

Anu Engineer commented on HADOOP-15840:
---

Good to know; maybe the microsecond precision approach will yield better 
results. But the 4-stddev approach is easier to do and test :)

 

> Slow RPC logger too spammy
> --
>
> Key: HADOOP-15840
> URL: https://issues.apache.org/jira/browse/HADOOP-15840
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> HADOOP-12325 added a capability where "slow" RPCs are logged in the NN log when 
> ipc.server.log.slow.rpc is enabled.
> The "slow" RPCs are supposed to be those whose processing time is more than 3 
> standard deviations out, which is supposed to account for 0.3% of total RPCs.
> However, I found that in practice the NN marks more than 1% of total RPCs as 
> slow, and I've seen RPCs whose processing time is 1ms declared slow too.
> {noformat}
> 2018-10-08 01:48:33,203 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.199.16:56645
> 2018-10-08 01:48:33,219 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.190.44:36435
> 2018-10-08 01:48:33,308 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.190.43:56530
> {noformat}
> This is too many. 1% means the NN spits out hundreds of slow RPCs per second 
> on average in the NN log.
> How about:
>  # use 4 stddev?
>  # use microsecond precision. The majority of RPCs take less than 1 millisecond 
> anyway, which makes the stddev calculation imprecise. An RPC could be measured 
> as taking 1 millisecond and be marked as "slow", simply because it starts in 
> one millisecond and ends in the next.
>  
> [~anu] any thoughts?
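
A toy calculation of why millisecond precision misfires (hypothetical numbers, 
not measured values):

{noformat}
// When most samples are recorded as 0 ms, mean and stddev are tiny and the
// 3-sigma cutoff sits well below 1 ms.
double mean = 0.05;                    // ms, hypothetical
double stddev = 0.20;                  // ms, hypothetical
double cutoff = mean + 3 * stddev;     // = 0.65 ms
// An RPC that actually takes ~2 us but straddles a millisecond boundary is
// measured as 1 ms > cutoff, so it gets logged as "slow". Moving to 4 sigma
// widens the cutoff; microsecond clocks fix the measurement itself.
{noformat}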



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15840) Slow RPC logger too spammy

2018-10-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645296#comment-16645296
 ] 

Anu Engineer edited comment on HADOOP-15840 at 10/10/18 5:20 PM:
-

Good to know; maybe the microsecond-precision approach will yield better 
results, but 4 stddev is easier to do and test :)

 


was (Author: anu):
good to know, may the microsecond precision approach will yield better results. 
But 4th dev. is easier to do and test :)

 

> Slow RPC logger too spammy
> --
>
> Key: HADOOP-15840
> URL: https://issues.apache.org/jira/browse/HADOOP-15840
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> HADOOP-12325 added a capability where "slow" RPCs are logged in NN log when 
> ipc.server.log.slow.rpc is enabled.
> The "slow" RPCs are supposed to be those whose processing time is outside 3 
> standard deviation, and it supposed to account for 0.3% of total RPCs.
> However, I found in practice, NN marks more than 1% of total RPCs as slow, 
> and I've seen RPCs whose processing time is 1ms declared slow too.
> {noformat}
> 2018-10-08 01:48:33,203 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.199.16:56645
> 2018-10-08 01:48:33,219 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.190.44:36435
> 2018-10-08 01:48:33,308 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.190.43:56530
> {noformat}
> This is too many: 1% means the NN spits out hundreds of slow RPCs per second 
> on average into the NN log.
> How about:
>  # use 4 stddev?
>  # use microsecond precision. The majority of RPCs take less than 1 
> millisecond anyway, which makes the stddev calculation imprecise. An RPC 
> could be measured as taking 1 millisecond and be marked as "slow" simply 
> because it starts in one millisecond and ends in the next.
>  
> [~anu] any thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15840) Slow RPC logger too spammy

2018-10-10 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645293#comment-16645293
 ] 

Wei-Chiu Chuang commented on HADOOP-15840:
--

Yeah, I thought about the same thing, but I kept seeing the same 1 ms slow RPCs 
even several days after the NN restart.

> Slow RPC logger too spammy
> --
>
> Key: HADOOP-15840
> URL: https://issues.apache.org/jira/browse/HADOOP-15840
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> HADOOP-12325 added a capability where "slow" RPCs are logged in NN log when 
> ipc.server.log.slow.rpc is enabled.
> The "slow" RPCs are supposed to be those whose processing time is outside 3 
> standard deviation, and it supposed to account for 0.3% of total RPCs.
> However, I found in practice, NN marks more than 1% of total RPCs as slow, 
> and I've seen RPCs whose processing time is 1ms declared slow too.
> {noformat}
> 2018-10-08 01:48:33,203 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.199.16:56645
> 2018-10-08 01:48:33,219 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.190.44:36435
> 2018-10-08 01:48:33,308 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.190.43:56530
> {noformat}
> This is too many: 1% means the NN spits out hundreds of slow RPCs per second 
> on average into the NN log.
> How about:
>  # use 4 stddev?
>  # use microsecond precision. The majority of RPCs take less than 1 
> millisecond anyway, which makes the stddev calculation imprecise. An RPC 
> could be measured as taking 1 millisecond and be marked as "slow" simply 
> because it starts in one millisecond and ends in the next.
>  
> [~anu] any thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry

2018-10-10 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15821:
-
Attachment: HADOOP-15821.009.patch

> Move Hadoop YARN Registry to Hadoop Registry
> 
>
> Key: HADOOP-15821
> URL: https://issues.apache.org/jira/browse/HADOOP-15821
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15821.000.patch, HADOOP-15821.001.patch, 
> HADOOP-15821.002.patch, HADOOP-15821.003.patch, HADOOP-15821.004.patch, 
> HADOOP-15821.005.patch, HADOOP-15821.006.patch, HADOOP-15821.007.patch, 
> HADOOP-15821.008.patch, HADOOP-15821.009.patch
>
>
> Currently, Hadoop YARN Registry is in YARN. However, this can be used by 
> other parts of the project (e.g., HDFS). In addition, it does not have any 
> real dependency to YARN.
> We should move it into commons and make it Hadoop Registry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry

2018-10-10 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15821:
-
Attachment: (was: HADOOP-15821.008.patch)

> Move Hadoop YARN Registry to Hadoop Registry
> 
>
> Key: HADOOP-15821
> URL: https://issues.apache.org/jira/browse/HADOOP-15821
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15821.000.patch, HADOOP-15821.001.patch, 
> HADOOP-15821.002.patch, HADOOP-15821.003.patch, HADOOP-15821.004.patch, 
> HADOOP-15821.005.patch, HADOOP-15821.006.patch, HADOOP-15821.007.patch, 
> HADOOP-15821.008.patch
>
>
> Currently, Hadoop YARN Registry is in YARN. However, this can be used by 
> other parts of the project (e.g., HDFS). In addition, it does not have any 
> real dependency to YARN.
> We should move it into commons and make it Hadoop Registry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry

2018-10-10 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15821:
-
Attachment: HADOOP-15821.008.patch

> Move Hadoop YARN Registry to Hadoop Registry
> 
>
> Key: HADOOP-15821
> URL: https://issues.apache.org/jira/browse/HADOOP-15821
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15821.000.patch, HADOOP-15821.001.patch, 
> HADOOP-15821.002.patch, HADOOP-15821.003.patch, HADOOP-15821.004.patch, 
> HADOOP-15821.005.patch, HADOOP-15821.006.patch, HADOOP-15821.007.patch, 
> HADOOP-15821.008.patch
>
>
> Currently, Hadoop YARN Registry is in YARN. However, this can be used by 
> other parts of the project (e.g., HDFS). In addition, it does not have any 
> real dependency to YARN.
> We should move it into commons and make it Hadoop Registry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15840) Slow RPC logger too spammy

2018-10-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645276#comment-16645276
 ] 

Anu Engineer edited comment on HADOOP-15840 at 10/10/18 5:13 PM:
-

[~jojochuang] Thanks for tagging me. I am plus one on either of these 
approaches. We might have to test and figure out which is the better approach.

Since I don't have the full context, it might also be that we are looking at a 
very small sample: in Server.java#logSlowRpcCalls there is {{final int 
minSampleSize = 1024}}; if you are seeing this right at the start of the 
system, that could be an issue too.
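
A minimal sketch of the gating being discussed (assumed names, not the actual 
Server.java code): a call is only flagged once enough samples exist and its 
processing time exceeds mean + 3 stddev, so both the sample-size floor and the 
threshold matter.

{code:java}
// Sketch only: names and the metric inputs are assumptions for illustration.
static boolean isSlowRpc(long processingTimeMs, long sampleCount,
    double mean, double stdDev) {
  final int minSampleSize = 1024;   // too few samples => never flag as slow
  // raising the multiplier from 3 to 4 would cut the logged volume
  return sampleCount > minSampleSize
      && processingTimeMs > mean + 3 * stdDev;
}
{code}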


was (Author: anu):
[~jojochuang] Thanks for tagging me. I am plus one either of these approaches. 
We might have to test and figure out what is a better approach.

> Slow RPC logger too spammy
> --
>
> Key: HADOOP-15840
> URL: https://issues.apache.org/jira/browse/HADOOP-15840
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> HADOOP-12325 added a capability where "slow" RPCs are logged in NN log when 
> ipc.server.log.slow.rpc is enabled.
> The "slow" RPCs are supposed to be those whose processing time is outside 3 
> standard deviation, and it supposed to account for 0.3% of total RPCs.
> However, I found in practice, NN marks more than 1% of total RPCs as slow, 
> and I've seen RPCs whose processing time is 1ms declared slow too.
> {noformat}
> 2018-10-08 01:48:33,203 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.199.16:56645
> 2018-10-08 01:48:33,219 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.190.44:36435
> 2018-10-08 01:48:33,308 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.190.43:56530
> {noformat}
> This is too many: 1% means the NN spits out hundreds of slow RPCs per second 
> on average into the NN log.
> How about:
>  # use 4 stddev?
>  # use microsecond precision. The majority of RPCs take less than 1 
> millisecond anyway, which makes the stddev calculation imprecise. An RPC 
> could be measured as taking 1 millisecond and be marked as "slow" simply 
> because it starts in one millisecond and ends in the next.
>  
> [~anu] any thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15840) Slow RPC logger too spammy

2018-10-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645276#comment-16645276
 ] 

Anu Engineer commented on HADOOP-15840:
---

[~jojochuang] Thanks for tagging me. I am plus one on either of these 
approaches. We might have to test and figure out which is the better approach.

> Slow RPC logger too spammy
> --
>
> Key: HADOOP-15840
> URL: https://issues.apache.org/jira/browse/HADOOP-15840
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> HADOOP-12325 added a capability where "slow" RPCs are logged in NN log when 
> ipc.server.log.slow.rpc is enabled.
> The "slow" RPCs are supposed to be those whose processing time is outside 3 
> standard deviation, and it supposed to account for 0.3% of total RPCs.
> However, I found in practice, NN marks more than 1% of total RPCs as slow, 
> and I've seen RPCs whose processing time is 1ms declared slow too.
> {noformat}
> 2018-10-08 01:48:33,203 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.199.16:56645
> 2018-10-08 01:48:33,219 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.190.44:36435
> 2018-10-08 01:48:33,308 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
> sendHeartbeat took 1 milliseconds to process from client 10.17.190.43:56530
> {noformat}
> This is too many: 1% means the NN spits out hundreds of slow RPCs per second 
> on average into the NN log.
> How about:
>  # use 4 stddev?
>  # use microsecond precision. The majority of RPCs take less than 1 
> millisecond anyway, which makes the stddev calculation imprecise. An RPC 
> could be measured as taking 1 millisecond and be marked as "slow" simply 
> because it starts in one millisecond and ends in the next.
>  
> [~anu] any thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15839) Review + update cloud store sensitive keys in hadoop.security.sensitive-config-keys

2018-10-10 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645248#comment-16645248
 ] 

Thomas Marquardt commented on HADOOP-15839:
---

I am not familiar with {{hadoop.security.sensitive-config-keys}}. What does it 
do? It seems it would be better to use a key vault. The next best alternative 
would be using XML mark-up to identify and encrypt the sensitive keys, for 
example, use  instead of . 

For ADL, WASB, and ABFS the sensitive keys include those with "oauth" and 
"account" in the configuration property names. The regexes that you are 
currently using do not catch all of them.
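
For what it's worth, a property of this kind typically holds a comma-separated 
list of regexes that are matched against configuration key names, with matching 
values redacted before the configuration is logged or dumped. A minimal sketch 
of that mechanism (assumed names, not the actual Hadoop implementation):

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

class SensitiveKeyRedactor {
  private final List<Pattern> patterns;

  SensitiveKeyRedactor(String commaSeparatedRegexes) {
    // e.g. "secret$,password$,oauth.*token$" (illustrative patterns only)
    patterns = Arrays.stream(commaSeparatedRegexes.split(","))
        .map(String::trim)
        .map(Pattern::compile)
        .collect(Collectors.toList());
  }

  String redact(String key, String value) {
    // any key matching one of the patterns has its value masked
    return patterns.stream().anyMatch(p -> p.matcher(key).find())
        ? "<redacted>" : value;
  }
}
{code}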

> Review + update cloud store sensitive keys in 
> hadoop.security.sensitive-config-keys
> ---
>
> Key: HADOOP-15839
> URL: https://issues.apache.org/jira/browse/HADOOP-15839
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HADOOP-15839-001.patch
>
>
> Make sure that {{hadoop.security.sensitive-config-keys}} is up to date with 
> all cloud store options, including
> h3. s3a:
> * s3a per-bucket secrets
> * s3a session tokens
> h3. abfs
> * {{fs.azure.account.oauth2.client.secret}}
> h3. adls
> fs.adl.oauth2.credential
> fs.adl.oauth2.refresh.token



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15839) Review + update cloud store sensitive keys in hadoop.security.sensitive-config-keys

2018-10-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645240#comment-16645240
 ] 

Hudson commented on HADOOP-15839:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15170 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15170/])
HADOOP-15839. Review + update cloud store sensitive keys in (stevel: rev 
cdc4350718055189fef8c70e31314607001d4009)
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Review + update cloud store sensitive keys in 
> hadoop.security.sensitive-config-keys
> ---
>
> Key: HADOOP-15839
> URL: https://issues.apache.org/jira/browse/HADOOP-15839
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HADOOP-15839-001.patch
>
>
> Make sure that {{hadoop.security.sensitive-config-keys}} is up to date with 
> all cloud store options, including
> h3. s3a:
> * s3a per-bucket secrets
> * s3a session tokens
> h3. abfs
> * {{fs.azure.account.oauth2.client.secret}}
> h3. adls
> fs.adl.oauth2.credential
> fs.adl.oauth2.refresh.token



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15839) Review + update cloud store sensitive keys in hadoop.security.sensitive-config-keys

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645193#comment-16645193
 ] 

Hadoop QA commented on HADOOP-15839:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
49m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15839 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943252/HADOOP-15839-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux bd0b0c0e2cc7 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cd28051 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15341/testReport/ |
| Max. process+thread count | 1513 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15341/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Review + update cloud store sensitive keys in 
> hadoop.security.sensitive-config-keys
> ---
>
> Key: HADOOP-15839
> URL: https://issues.apache.org/jira/browse/HADOOP-15839
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.2.0
>R

[jira] [Commented] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645191#comment-16645191
 ] 

Hadoop QA commented on HADOOP-15785:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 3941 unchanged - 46 fixed = 3941 total (was 3987) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15785 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943247/HADOOP-15785.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f085891d55e5 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cd28051 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15340/testReport/ |
| Max. process+thread count | 1502 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15340/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [JDK10] Javadoc build fails on

[jira] [Created] (HADOOP-15840) Slow RPC logger too spammy

2018-10-10 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-15840:


 Summary: Slow RPC logger too spammy
 Key: HADOOP-15840
 URL: https://issues.apache.org/jira/browse/HADOOP-15840
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha1, 2.7.4, 2.8.0
Reporter: Wei-Chiu Chuang


HADOOP-12325 added a capability where "slow" RPCs are logged in NN log when 
ipc.server.log.slow.rpc is enabled.

The "slow" RPCs are supposed to be those whose processing time is outside 3 
standard deviation, and it supposed to account for 0.3% of total RPCs.

However, I found that in practice the NN marks more than 1% of total RPCs as 
slow, and I've seen RPCs whose processing time is 1 ms declared slow too.
{noformat}
2018-10-08 01:48:33,203 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
sendHeartbeat took 1 milliseconds to process from client 10.17.199.16:56645
2018-10-08 01:48:33,219 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
sendHeartbeat took 1 milliseconds to process from client 10.17.190.44:36435
2018-10-08 01:48:33,308 WARN org.apache.hadoop.ipc.Server: Slow RPC : 
sendHeartbeat took 1 milliseconds to process from client 10.17.190.43:56530
{noformat}
This is too many: 1% means the NN spits out hundreds of slow RPCs per second on 
average into the NN log.

How about:
 # use 4 stddev?
 # use microsecond precision. The majority of RPCs take less than 1 millisecond 
anyway, which makes the stddev calculation imprecise. An RPC could be measured 
as taking 1 millisecond and be marked as "slow" simply because it starts in one 
millisecond and ends in the next (see the illustration below).

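To illustrate point 2 with hypothetical clock readings:

{code:java}
// A sub-millisecond call that straddles a millisecond tick is measured
// as 1 ms, which both skews the sample and trips the "slow" check.
long startMs = 100;                 // clock ticks to 100 (real time ~100.9 ms)
long endMs = 101;                   // clock ticks to 101 (real time ~101.05 ms)
long measuredMs = endMs - startMs;  // 1 ms, though the call took ~0.15 ms
{code}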
 

[~anu] any thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645192#comment-16645192
 ] 

Hadoop QA commented on HADOOP-15679:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 115 unchanged - 1 fixed = 116 total (was 116) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 39s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
52s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | HADOOP-15679 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943234/HADOOP-15679-branch-2-004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux f397dcd7f448 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / cc1bf7f |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15339/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15339/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15339/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15339/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1485 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15339/console |
| Powered by | Apache Yetus 0.

[jira] [Updated] (HADOOP-15839) Review + update cloud store sensitive keys in hadoop.security.sensitive-config-keys

2018-10-10 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15839:

   Resolution: Fixed
 Assignee: Steve Loughran
Fix Version/s: 3.1.2
   3.2.0
   Status: Resolved  (was: Patch Available)

Thanks; committed to trunk and then cherry-picked into 3.1 and 3.2.

> Review + update cloud store sensitive keys in 
> hadoop.security.sensitive-config-keys
> ---
>
> Key: HADOOP-15839
> URL: https://issues.apache.org/jira/browse/HADOOP-15839
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HADOOP-15839-001.patch
>
>
> Make sure that {{hadoop.security.sensitive-config-keys}} is up to date with 
> all cloud store options, including
> h3. s3a:
> * s3a per-bucket secrets
> * s3a session tokens
> h3. abfs
> * {{fs.azure.account.oauth2.client.secret}}
> h3. adls
> fs.adl.oauth2.credential
> fs.adl.oauth2.refresh.token



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15839) Review + update cloud store sensitive keys in hadoop.security.sensitive-config-keys

2018-10-10 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645186#comment-16645186
 ] 

Steve Loughran commented on HADOOP-15839:
-

(OK, I see I've put this in without Yetus commenting. If Yetus complains I'll 
roll back.)

> Review + update cloud store sensitive keys in 
> hadoop.security.sensitive-config-keys
> ---
>
> Key: HADOOP-15839
> URL: https://issues.apache.org/jira/browse/HADOOP-15839
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HADOOP-15839-001.patch
>
>
> Make sure that {{hadoop.security.sensitive-config-keys}} is up to date with 
> all cloud store options, including
> h3. s3a:
> * s3a per-bucket secrets
> * s3a session tokens
> h3. abfs
> * {{fs.azure.account.oauth2.client.secret}}
> h3. adls
> fs.adl.oauth2.credential
> fs.adl.oauth2.refresh.token



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15835) Reuse Object Mapper in KMSJSONWriter

2018-10-10 Thread Jonathan Eagles (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-15835:
-
   Resolution: Fixed
Fix Version/s: 2.8.6
   3.0.4
   2.9.2
   Status: Resolved  (was: Patch Available)

> Reuse Object Mapper in KMSJSONWriter
> 
>
> Key: HADOOP-15835
> URL: https://issues.apache.org/jira/browse/HADOOP-15835
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Fix For: 2.9.2, 3.0.4, 2.8.6
>
> Attachments: HADOOP-15835.001-branch-2.9.patch, 
> HADOOP-15835.001-branch-3.0.patch
>
>
> In lieu of HADOOP-15550 in branch-3.0, branch-2.9, branch-2.8. This patch 
> will provide some benefit of MapperObject reuse though not as complete as the 
> JsonSerialization util lazy loading fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60

2018-10-10 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645157#comment-16645157
 ] 

Steve Loughran commented on HADOOP-15832:
-

If we're going to ship BC jars, that's going to require some crypto export 
paperwork in the release notes.

> Upgrade BouncyCastle to 1.60
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15832.001.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> <dependency>
>   <groupId>org.bouncycastle</groupId>
>   <artifactId>bcprov-jdk16</artifactId>
>   <version>1.46</version>
>   <scope>test</scope>
> </dependency>
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from 2011! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15835) Reuse Object Mapper in KMSJSONWriter

2018-10-10 Thread Jonathan Eagles (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645117#comment-16645117
 ] 

Jonathan Eagles commented on HADOOP-15835:
--

Thanks for the review, [~xiaochen]. Committing this patch to branch-3.0, 
branch-2.9, branch-2.8
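
For context, the reuse amounts to sharing a single mapper instead of 
constructing one per response; Jackson's {{ObjectMapper}} is thread-safe once 
configured. A rough sketch of the pattern (assumed names, not the exact 
KMSJSONWriter code):

{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.io.OutputStream;

class JsonWriterSketch {
  // one shared instance instead of "new ObjectMapper()" on every request
  private static final ObjectMapper MAPPER = new ObjectMapper();

  void writeTo(Object entity, OutputStream out) throws IOException {
    MAPPER.writerWithDefaultPrettyPrinter().writeValue(out, entity);
  }
}
{code}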

> Reuse Object Mapper in KMSJSONWriter
> 
>
> Key: HADOOP-15835
> URL: https://issues.apache.org/jira/browse/HADOOP-15835
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Attachments: HADOOP-15835.001-branch-2.9.patch, 
> HADOOP-15835.001-branch-3.0.patch
>
>
> In lieu of HADOOP-15550 in branch-3.0, branch-2.9, branch-2.8. This patch 
> will provide some benefit of MapperObject reuse though not as complete as the 
> JsonSerialization util lazy loading fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15839) Review + update cloud store sensitive keys in hadoop.security.sensitive-config-keys

2018-10-10 Thread Larry McCay (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645068#comment-16645068
 ] 

Larry McCay commented on HADOOP-15839:
--

[~ste...@apache.org] - this LGTM...

+1

> Review + update cloud store sensitive keys in 
> hadoop.security.sensitive-config-keys
> ---
>
> Key: HADOOP-15839
> URL: https://issues.apache.org/jira/browse/HADOOP-15839
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15839-001.patch
>
>
> Make sure that {{hadoop.security.sensitive-config-keys}} is up to date with 
> all cloud store options, including
> h3. s3a:
> * s3a per-bucket secrets
> * s3a session tokens
> h3. abfs
> * {{fs.azure.account.oauth2.client.secret}}
> h3. adls
> fs.adl.oauth2.credential
> fs.adl.oauth2.refresh.token



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-10-10 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645064#comment-16645064
 ] 

Steve Loughran commented on HADOOP-15819:
-

maybe ask the fs itself, e.g.

{code}
// sketch: read the fs's own configuration to see whether caching was disabled
if (fs != null) {
  boolean cacheDisabled = fs.getConf()
      .getBoolean(FS_S3A_IMPL_DISABLE_CACHE, false);
}
{code}
Anyway, I think the "allow tests to explicitly declare when they want closing" 
is better
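
One shape that could take, as a sketch (assuming the standard {{FileSystem}} 
API; {{uri}} and {{conf}} are the test's URI and Configuration):

{code:java}
// FileSystem.newInstance() returns an instance that is not shared via
// FileSystem.CACHE, so a test may close it in teardown without breaking
// other tests that still hold the cached FileSystem for the same URI.
FileSystem fs = FileSystem.newInstance(uri, conf);
try {
  // ... exercise fs ...
} finally {
  fs.close();   // safe: this instance is private to the test
}
{code}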

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of another - so we should not see the tests failing 
> like this. The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem, so if test A uses a FileSystem and closes it in teardown, then 
> test B will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error 
> occurs. I'll attach this modified java file for reference. See the next 
> example of the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(Abstract

[jira] [Updated] (HADOOP-15839) Review + update cloud store sensitive keys in hadoop.security.sensitive-config-keys

2018-10-10 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15839:

Target Version/s: 3.3.0
  Status: Patch Available  (was: Open)

> Review + update cloud store sensitive keys in 
> hadoop.security.sensitive-config-keys
> ---
>
> Key: HADOOP-15839
> URL: https://issues.apache.org/jira/browse/HADOOP-15839
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15839-001.patch
>
>
> Make sure that {{hadoop.security.sensitive-config-keys}} is up to date with 
> all cloud store options, including
> h3. s3a:
> * s3a per-bucket secrets
> * s3a session tokens
> h3. abfs
> * {{fs.azure.account.oauth2.client.secret}}
> h3. adls
> fs.adl.oauth2.credential
> fs.adl.oauth2.refresh.token



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15839) Review + update cloud store sensitive keys in hadoop.security.sensitive-config-keys

2018-10-10 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15839:

Attachment: HADOOP-15839-001.patch

> Review + update cloud store sensitive keys in 
> hadoop.security.sensitive-config-keys
> ---
>
> Key: HADOOP-15839
> URL: https://issues.apache.org/jira/browse/HADOOP-15839
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15839-001.patch
>
>
> Make sure that {{hadoop.security.sensitive-config-keys}} is up to date with 
> all cloud store options, including
> h3. s3a:
> * s3a per-bucket secrets
> * s3a session tokens
> h3. abfs
> * {{fs.azure.account.oauth2.client.secret}}
> h3. adls
> fs.adl.oauth2.credential
> fs.adl.oauth2.refresh.token



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15839) Review + update cloud store sensitive keys in hadoop.security.sensitive-config-keys

2018-10-10 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15839:
---

 Summary: Review + update cloud store sensitive keys in 
hadoop.security.sensitive-config-keys
 Key: HADOOP-15839
 URL: https://issues.apache.org/jira/browse/HADOOP-15839
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: conf
Affects Versions: 3.2.0
Reporter: Steve Loughran


Make sure that {{hadoop.security.sensitive-config-keys}} is up to date with all 
cloud store options, including

h3. s3a:
* s3a per-bucket secrets
* s3a session tokens

h3. abfs
* {{fs.azure.account.oauth2.client.secret}}

h3. adls
fs.adl.oauth2.credential
fs.adl.oauth2.refresh.token




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15820) ZStandardDecompressor native code sets an integer field as a long

2018-10-10 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645035#comment-16645035
 ] 

Jason Lowe commented on HADOOP-15820:
-

Thanks, [~jojochuang]!  Sorry for missing that commit.

> ZStandardDecompressor native code sets an integer field as a long
> -
>
> Key: HADOOP-15820
> URL: https://issues.apache.org/jira/browse/HADOOP-15820
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Blocker
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15820.001.patch
>
>
> Java_org_apache_hadoop_io_compress_zstd_ZStandardDecompressor_init in 
> ZStandardDecompressor.c sets the {{remaining}} field as a long when it 
> actually is an integer.
> Kudos to Ben Lau from our HBase team for discovering this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15837) DynamoDB table Update can fail S3A FS init

2018-10-10 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645031#comment-16645031
 ] 

Steve Loughran commented on HADOOP-15837:
-

Patch 002; fixes checkstyle. 

Tested against S3 Ireland with {{-Ds3guard -Ddynamodb}}; failures are in 
unrelated issues, as discussed and covered elsewhere.

* this patch is ready for review
* I plan to backport the change to the case statement to 2.10-3.9, as that 
addresses the key issue of a capacity change while S3Guard is in use, which 
autoscale will implicitly do in high-load situations (i.e. use S3Guard heavily, 
DDB triggers a scale-up, S3Guard fails)

[~mackrorysd]: can you look at this? It's a serious issue which will surface 
in the wild.

Errors
{code}
[ERROR] Errors: 
[ERROR]   
ITestS3GuardConcurrentOps.testConcurrentTableCreations:166->deleteTable:77 »  
...
[ERROR]   ITestS3GuardToolDynamoDB.testDynamoDBInitDestroyCycle:315 » 
ResourceInUse Atte...
[ERROR]   
ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testSetCapacityFailFastIfNotGuarded:330->AbstractS3GuardToolTestBase.lambda$testSetCapacityFailFastIfNotGuarded$2:331->AbstractS3GuardToolTestBase.run:115
 » FileNotFound
{code}
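
The catch-and-retry shape being proposed, as a rough sketch (the exception type 
and helper names are assumptions, not the actual S3Guard or AWS SDK code):

{code:java}
// retry waiting for the table to leave UPDATING and become ACTIVE again
int attempts = 0;
while (true) {
  try {
    waitForTableActive(table);      // may give up while the table is UPDATING
    break;
  } catch (Exception e) {           // assumed failure mode of the wait
    if (++attempts > MAX_RETRIES) {
      throw e;
    }
    Thread.sleep(RETRY_DELAY_MS);   // back off, then re-check the table state
  }
}
{code}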

> DynamoDB table Update can fail S3A FS init
> --
>
> Key: HADOOP-15837
> URL: https://issues.apache.org/jira/browse/HADOOP-15837
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: s3guard test with small capacity (10) but autoscale 
> enabled & multiple consecutive parallel test runs executed...this seems to 
> have been enough load to trigger the state change
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15837-001.patch, HADOOP-15837-002.patch
>
>
> When DDB autoscales a table, it goes into an UPDATING state. The 
> waitForTableActive operation in the AWS SDK doesn't seem to wait long enough 
> for this to recover. We need to catch & retry



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-10-10 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644973#comment-16644973
 ] 

Gabor Bota edited comment on HADOOP-15819 at 10/10/18 2:19 PM:
---

I have some bad news: I get these issues for tests where 
{{FS_S3A_IMPL_DISABLE_CACHE}} is set to true, so the caching _should be_ 
disabled.

What I did was modify 
{{org.apache.hadoop.fs.s3a.AbstractS3ATestBase#teardown}} to:

{code:java}
  @Override
  public void teardown() throws Exception {
    super.teardown();
    boolean fsCacheDisabled = getConfiguration()
        .getBoolean(FS_S3A_IMPL_DISABLE_CACHE, false);
    if (fsCacheDisabled) {
      describe("closing file system");
      LOG.warn("Closing fs. FS_S3A_IMPL_DISABLE_CACHE: " + fsCacheDisabled);
      IOUtils.closeStream(getFileSystem());
    }
  }
{code}

And there were still issues after this.


was (Author: gabor.bota):
I have some bad news: I get these issues for tests where 
{{FS_S3A_IMPL_DISABLE_CACHE}} set to true, so the caching _should be_ disabled.

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of the other - so we should not see that the tests are failing 
> like this. The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem so if A test uses a FileSystem and closes it in teardown then B 
> test will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> that the failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error should 
> occur. I'll attach this modified java file for reference. See the next 
> example of th

[jira] [Updated] (HADOOP-15837) DynamoDB table Update can fail S3A FS init

2018-10-10 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15837:

Attachment: HADOOP-15837-002.patch

> DynamoDB table Update can fail S3A FS init
> --
>
> Key: HADOOP-15837
> URL: https://issues.apache.org/jira/browse/HADOOP-15837
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: s3guard test with small capacity (10) but autoscale 
> enabled & multiple consecutive parallel test runs executed...this seems to 
> have been enough load to trigger the state change
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15837-001.patch, HADOOP-15837-002.patch
>
>
> When DDB autoscales a table, it goes into an UPDATING state. The 
> waitForTableActive operation in the AWS SDK doesn't seem to wait long enough 
> for this to recover. We need to catch & retry



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15837) DynamoDB table Update can fail S3A FS init

2018-10-10 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15837:

Status: Open  (was: Patch Available)

> DynamoDB table Update can fail S3A FS init
> --
>
> Key: HADOOP-15837
> URL: https://issues.apache.org/jira/browse/HADOOP-15837
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: s3guard test with small capacity (10) but autoscale 
> enabled & multiple consecutive parallel test runs executed...this seems to 
> have been enough load to trigger the state change
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15837-001.patch
>
>
> When DDB autoscales a table, it goes into an UPDATING state. The 
> waitForTableActive operation in the AWS SDK doesn't seem to wait long enough 
> for this to recover. We need to catch & retry



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645010#comment-16645010
 ] 

Dinesh Chitlangia commented on HADOOP-15785:


[~tasanuma0829] san - Thank you for the review and feedback. Attached patch 
003, which addresses the review comments.

For *QuotaUsage*, I changed it to:
{code:java}
/**
 * Output format:
 * |----12----| |-----15------| |-----15------| |-----15------| |-------18-------|
 *    QUOTA   REMAINING_QUOTA   SPACE_QUOTA   SPACE_QUOTA_REM   FILE_NAME
 */
{code}
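
For reference, one hedged way to make the offending Client.java line 
HTML-valid would be to wrap the angle-bracket tuple in {{@literal}} 
(illustrative only; the actual patch may escape it differently):

{code:java}
/**
 * Connections to servers are uniquely identified by
 * {@literal <remoteAddress, protocol, ticket>}.
 */
{code}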

> [JDK10] Javadoc build fails on JDK 10 in hadoop-common
> --
>
> Key: HADOOP-15785
> URL: https://issues.apache.org/jira/browse/HADOOP-15785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-15785.001.patch, HADOOP-15785.002.patch, 
> HADOOP-15785.003.patch
>
>
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-common-project/hadoop-common
> ...
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 02:22 min
> [INFO] Finished at: 2018-09-25T02:23:06Z
> [INFO] Final Memory: 119M/467M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-common: MavenReportException: Error while generating Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - You have not specified the version 
> of HTML to use.
> [ERROR] The default is currently HTML 4.01, but this will change to HTML5
> [ERROR] in a future release. To suppress this warning, please specify the
> [ERROR] version of HTML used in your documentation comments and to be
> [ERROR] generated by this doclet, using the -html4 or -html5 options.
> [ERROR] 
> /hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:1578:
>  error: malformed HTML
> [ERROR] * to servers are uniquely identified by <remoteAddress, protocol, ticket>
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-10-10 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644940#comment-16644940
 ] 

Steve Loughran edited comment on HADOOP-15819 at 10/10/18 2:08 PM:
---

bq. The FS cache really feels inherently broken in the parallel tests case, 
which is why I initially liked the idea of disabling caching for the tests.

Parallel tests run in their own JVMs; the main issue there is that you need to 
be confident that tests aren't writing to the same local/remote paths.

At the same time, I've seen old test instances get recycled, which makes me 
think that the parallel runner feeds work out to already-instantiated test 
runners as they complete individual test cases. So reuse does happen, it just 
happens a lot more in sequential runs.

Tests which want special configs of the FS can't handle recycled instances, 
hence the need for new filesystems & a close afterwards, but I don't see why 
other tests should be closing much.

* HADOOP-13131 added the close, along with 
{{S3ATestUtils.createTestFilesystem()}}, which does create filesystems that 
need to be closed.
* But I don't see that creation happening much, especially given 
{{S3AContract}}'s FS is just from a get().
* Many tests call {{S3ATestUtils.disableFilesystemCaching(conf)}} before 
FileSystem.get, which guarantees unique instances

Which makes me think: yes, closing the FS in teardown is overkill except in the 
special case of "creates a new filesystem()", either explicitly or implicitly.

As [~mackrorysd] says: surprising this hasn't surfaced before. But to fix it 
means that it should be done properly.

* If filesystems were always closed and new ones created (i.e. no caching), 
test cases run way, way faster. 
* Those tests which do need their own FS instance can close it in teardown, and 
set it up themselves.
* And those tests which absolutely must have FS.get() return their specific 
filesystem must: (a) enable caching and (b) remove their FS from the cache in 
teardown (e.g. FileSystem.closeAll)

This is probably going to force a review of all the tests, maybe have some 
method in AbstractS3ATestBase

{code}
protected boolean uniqueFilesystemInstance() { return false; }
{code}

then 
# if true, in createConfiguration() call {{disableFilesystemCaching}}
# if true in teardown: close the FS.

Next:
* Go through all uses of {{disableFilesystemCaching}}, and in those tests 
have {{uniqueFilesystemInstance}} return true. 
* Look at uses of {{S3ATestUtils.createTestFilesystem()}} & make sure they are 
closing it afterwards

This is going to span all the tests. Joy
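
A hedged sketch of that wiring in AbstractS3ATestBase (the method name comes 
from the proposal above; nothing here is committed code):

{code:java}
protected boolean uniqueFilesystemInstance() {
  return false;
}

@Override
protected Configuration createConfiguration() {
  Configuration conf = super.createConfiguration();
  if (uniqueFilesystemInstance()) {
    // guarantees FileSystem.get() hands back a fresh, uncached instance
    S3ATestUtils.disableFilesystemCaching(conf);
  }
  return conf;
}

@Override
public void teardown() throws Exception {
  super.teardown();
  if (uniqueFilesystemInstance()) {
    // safe to close: no other test can be holding this instance
    IOUtils.closeStream(getFileSystem());
  }
}
{code}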






was (Author: ste...@apache.org):
bq. The FS cache really feels inherently broken in the parallel tests case, 
which is why I initially liked the idea of disabling caching for the tests.

Parallel tests run in their own JVMs; the main issue there is that you need to 
be confident that tests aren't writing to the same local/remote paths.

At the same time, I've seen old test instances get recycled, which makes me 
think that the parallel runner feeds work out to already-instantiated test 
runners as they complete individual test cases. So reuse does happen, it just 
happens a lot more in sequential runs.

Tests which want special configs of the FS can't handle recycled instances, 
hence the need for new filesystems & a close afterwards, but I don't see why 
other tests should be closing much.

* HADOOP-13131 added the close, along with 
{{S3ATestUtils.createTestFilesystem()}}, which does create filesystems that 
need to be closed.
* But I don't see that creation happening much, especially given 
{{S3AContract}}'s FS is just from a get().
* Many tests call {{S3ATestUtils.disableFilesystemCaching(conf)}} before 
FileSystem.get, which guarantees unique instances

Which makes me think: yes, closing the FS in teardown is overkill except in the 
special case of "creates a new filesystem()", either explicitly or implicitly.

As [~mackrorysd] says: surprising this hasn't surfaced before. But to fix it 
means that it should be done properly.

* If filesystems were always closed and new ones created (i.e. no caching), 
test cases run way, way faster. 
* Those tests which do need their own FS instance can close it in teardown, and 
set it up themselves.
* And those tests which absolutely must have FS.get() return their specific 
filesystem must: (a) enable caching and (b) remove their FS from the cache in 
teardown (e.g. FileSystem.closeAll)

This is probably going to force a review of all the tests, maybe have some 
method in AbstractS3ATestBase

{code}
protected boolean uniqueFilesystemInstance() { return false; }
{code}

then 
# if true, in createConfiguration() call {{disableFilesystemCaching}}
# if true in teardown: close the FS.

Next:
* Go through all uses of {{disableFilesystemCaching}}, and in those tests 
have {{uniqueFilesystemInstance}} return true. 
* Look at uses of  {{S3ATestUtils.createTestF

[jira] [Updated] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-15785:
---
Attachment: HADOOP-15785.003.patch
Status: Patch Available  (was: Open)

> [JDK10] Javadoc build fails on JDK 10 in hadoop-common
> --
>
> Key: HADOOP-15785
> URL: https://issues.apache.org/jira/browse/HADOOP-15785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-15785.001.patch, HADOOP-15785.002.patch, 
> HADOOP-15785.003.patch
>
>
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-common-project/hadoop-common
> ...
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 02:22 min
> [INFO] Finished at: 2018-09-25T02:23:06Z
> [INFO] Final Memory: 119M/467M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-common: MavenReportException: Error while generating Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - You have not specified the version 
> of HTML to use.
> [ERROR] The default is currently HTML 4.01, but this will change to HTML5
> [ERROR] in a future release. To suppress this warning, please specify the
> [ERROR] version of HTML used in your documentation comments and to be
> [ERROR] generated by this doclet, using the -html4 or -html5 options.
> [ERROR] 
> /hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:1578:
>  error: malformed HTML
> [ERROR] * to servers are uniquely identified by <remoteAddress, protocol, ticket>
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-15785:
---
Status: Open  (was: Patch Available)

> [JDK10] Javadoc build fails on JDK 10 in hadoop-common
> --
>
> Key: HADOOP-15785
> URL: https://issues.apache.org/jira/browse/HADOOP-15785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-15785.001.patch, HADOOP-15785.002.patch
>
>
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-common-project/hadoop-common
> ...
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 02:22 min
> [INFO] Finished at: 2018-09-25T02:23:06Z
> [INFO] Final Memory: 119M/467M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-common: MavenReportException: Error while generating Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - You have not specified the version 
> of HTML to use.
> [ERROR] The default is currently HTML 4.01, but this will change to HTML5
> [ERROR] in a future release. To suppress this warning, please specify the
> [ERROR] version of HTML used in your documentation comments and to be
> [ERROR] generated by this doclet, using the -html4 or -html5 options.
> [ERROR] 
> /hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:1578:
>  error: malformed HTML
> [ERROR] * to servers are uniquely identified by <remoteAddress, protocol, ticket>
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15820) ZStandardDecompressor native code sets an integer field as a long

2018-10-10 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645000#comment-16645000
 ] 

Wei-Chiu Chuang commented on HADOOP-15820:
--

branch-3.2 was branched off trunk a few days ago and the commit didn't get 
cherry-picked into that branch. I've just pushed the commit to branch-3.2 
(commit hash 5f97c0cd7657cdeb6196b6f2c83e44990044f52f).

> ZStandardDecompressor native code sets an integer field as a long
> -
>
> Key: HADOOP-15820
> URL: https://issues.apache.org/jira/browse/HADOOP-15820
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Blocker
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15820.001.patch
>
>
> Java_org_apache_hadoop_io_compress_zstd_ZStandardDecompressor_init in 
> ZStandardDecompressor.c sets the {{remaining}} field as a long when it 
> actually is an integer.
> Kudos to Ben Lau from our HBase team for discovering this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-10-10 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644973#comment-16644973
 ] 

Gabor Bota edited comment on HADOOP-15819 at 10/10/18 1:47 PM:
---

I have some bad news: I get these issues for tests where 
{{FS_S3A_IMPL_DISABLE_CACHE}} is set to true, so the caching _should be_ disabled.


was (Author: gabor.bota):
I have some bad news: I get these issues for tests where 
{{FS_S3A_IMPL_DISABLE_CACHE}} is set to true, so the caching is disabled.

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of the other - so we should not see that the tests are failing 
> like this. The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem so if A test uses a FileSystem and closes it in teardown then B 
> test will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> that the failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error should 
> occur. I'll attach this modified java file for reference. See the next 
> example of the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase

[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-10-10 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644973#comment-16644973
 ] 

Gabor Bota commented on HADOOP-15819:
-

I have some bad news: I get these issues for tests where 
{{FS_S3A_IMPL_DISABLE_CACHE}} is set to true, so the caching is disabled.

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of the other - so we should not see that the tests are failing 
> like this. The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem so if A test uses a FileSystem and closes it in teardown then B 
> test will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> that the failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error should 
> occur. I'll attach this modified java file for reference. See the next 
> example of the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AClosedFS.se

[jira] [Updated] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-10-10 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15679:

Target Version/s: 3.0.3, 2.10.0, 2.9.2, 2.8.5  (was: 2.10.0, 2.9.2, 3.0.3, 
2.8.5)
  Status: Patch Available  (was: Open)

Resubmitting patch 004 to get it through Yetus again; will commit if I am happy.

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch, 
> HADOOP-15679-branch-2-001.patch, HADOOP-15679-branch-2-001.patch, 
> HADOOP-15679-branch-2-003.patch, HADOOP-15679-branch-2-003.patch, 
> HADOOP-15679-branch-2-004.patch, HADOOP-15679-branch-2-004.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final local 
> cached block of data (could be 32+MB), and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)
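
A minimal sketch of reading such a duration property, assuming the standard 
{{Configuration.getTimeDuration()}} API and a {{conf}} instance in scope (the 
property name below is illustrative, not necessarily what the patch settles on):

{code:java}
// default 30s; read as a time duration so values like "45s" or "2m" work
long shutdownTimeout = conf.getTimeDuration(
    "hadoop.service.shutdown.timeout", 30, TimeUnit.SECONDS);
{code}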



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-10-10 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15679:

Attachment: HADOOP-15679-branch-2-004.patch

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch, 
> HADOOP-15679-branch-2-001.patch, HADOOP-15679-branch-2-001.patch, 
> HADOOP-15679-branch-2-003.patch, HADOOP-15679-branch-2-003.patch, 
> HADOOP-15679-branch-2-004.patch, HADOOP-15679-branch-2-004.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final local 
> cached block of data (could be 32+MB), and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-10-10 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15679:

Target Version/s: 3.0.3, 2.10.0, 2.9.2, 2.8.5  (was: 2.10.0, 2.9.2, 3.0.3, 
2.8.5)
  Status: Open  (was: Patch Available)

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch, 
> HADOOP-15679-branch-2-001.patch, HADOOP-15679-branch-2-001.patch, 
> HADOOP-15679-branch-2-003.patch, HADOOP-15679-branch-2-003.patch, 
> HADOOP-15679-branch-2-004.patch, HADOOP-15679-branch-2-004.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final local 
> cached block of data (could be 32+MB), and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-10-10 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644940#comment-16644940
 ] 

Steve Loughran commented on HADOOP-15819:
-

bq. The FS cache really feels inherently broken in the parallel tests case, 
which is why I initially liked the idea of disabling caching for the tests.

Parallel tests run in their own JVMs; the main issue there is that you need to 
be confident that tests aren't writing to the same local/remote paths.

At the same time, I've seen old test instances get recycled, which makes me 
think that the parallel runner feeds work out to already-instantiated test 
runners as they complete individual test cases. So reuse does happen, it just 
happens a lot more in sequential runs.

Tests which want special configs of the FS can't handle recycled instances, 
hence the need for new filesystems & a close afterwards, but I don't see why 
other tests should be closing much.

* HADOOP-13131 added the close, along with 
{{S3ATestUtils.createTestFilesystem()}}, which does create filesystems that 
need to be closed.
* But I don't see that creation happening much, especially given 
{{S3AContract}}'s FS is just from a get().
* Many tests call {{S3ATestUtils.disableFilesystemCaching(conf)}} before 
FileSystem.get, which guarantees unique instances

Which makes me think: yes, closing the FS in teardown is overkill except in the 
special case of "creates a new filesystem()", either explicitly or implicitly.

As [~mackrorysd] says: surprising this hasn't surfaced before. But to fix it 
means that it should be done properly.

* If filesystems were always closed and new ones created (i.e. no caching), 
test cases run way, way faster. 
* Those tests which do need their own FS instance can close it in teardown, and 
set it up themselves.
* And those tests which absolutely must have FS.get() return their specific 
filesystem must: (a) enable caching and (b) remove their FS from the cache in 
teardown (e.g. FileSystem.closeAll)

This is probably going to force a review of all the tests, maybe have some 
method in AbstractS3ATestBase

{code}
protected boolean uniqueFilesystemInstance() { return false; }
{code}

then 
# if true, in createConfiguration() call {{disableFilesystemCaching}}
# if true in teardown: close the FS.

Next:
* Go through all uses of {{disableFilesystemCaching}}, and in those tests 
have {{uniqueFilesystemInstance}} return true. 
* Look at uses of {{S3ATestUtils.createTestFilesystem()}} & make sure they are 
closing it afterwards

This is going to span all the tests. Joy





> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMag

[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-10-10 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644730#comment-16644730
 ] 

Gabor Bota commented on HADOOP-15819:
-

I think we should focus on solving this case first, so it would be better not 
to talk about parallel test runs here.
The FS cache itself could be broken when running parallel tests, but I think it 
was never prepared for parallel test runs, and that change should be addressed 
in another issue since it's an entirely different topic. Even if we don't run 
parallel tests we still have this problem.

The issue I see right now is that we close the filesystem after each test, and 
I was not able to find another class that does this. I think it's very specific 
to AbstractS3ATestBase (so S3A) and its 33 implementations. 
{{org.apache.hadoop.fs.s3a.AbstractS3ATestBase#teardown}} is called after each 
test, because the superclass's teardown is annotated with {{@After}}.

With the current FS cache implementation, another test can get the same FS 
instance that the previous test closed. It can even happen when the tests 
being run are from the *same class*.
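
A minimal illustration of that hazard, assuming FS caching is in effect (the 
bucket name is made up):

{code:java}
// With caching enabled, both get() calls return the same object, so a
// close() by one holder breaks every other holder of that instance.
Configuration conf = new Configuration();
FileSystem a = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
FileSystem b = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
assert a == b;                   // same cached instance
a.close();                       // e.g. test A's teardown
b.listStatus(new Path("/"));     // test B now fails: "FileSystem is closed!"
{code}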

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of the other - so we should not see that the tests are failing 
> like this. The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem so if A test uses a FileSystem and closes it in teardown then B 
> test will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> that the failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error should 
> occur. I'll attach 

[jira] [Commented] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-10 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644648#comment-16644648
 ] 

Ewan Higgs commented on HADOOP-15826:
-

Good catch. revertCommit should also be fixed to have {{@OnceTranslated}}:

{code:java}
  @Retries.RetryTranslated
  public void revertCommit(String destKey) throws IOException {
    once("revert commit", destKey,
        () -> {
          Path destPath = owner.keyToQualifiedPath(destKey);
          owner.deleteObjectAtPath(destPath, destKey, true);
          owner.maybeCreateFakeParentDirectory(destPath);
        }
    );
  }
{code}
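
That is, a sketch of the suggested change (assuming the 
{{Retries.OnceTranslated}} annotation from the S3A {{Retries}} class) is just 
the annotation swap:

{code:java}
  @Retries.OnceTranslated   // was @Retries.RetryTranslated
  public void revertCommit(String destKey) throws IOException {
    // method body unchanged from the snippet above
  }
{code}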

> @Retries annotation of putObject() call & uses wrong
> 
>
> Key: HADOOP-15826
> URL: https://issues.apache.org/jira/browse/HADOOP-15826
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15826-001.patch
>
>
> The retry annotations of the S3AFilesystem putObject call and its 
> writeOperationsHelper use aren't in sync with what the code does.
> Fix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org