[jira] [Commented] (HADOOP-15993) Upgrade Kafka version in hadoop-kafka module

2020-01-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011515#comment-17011515
 ] 

Hudson commented on HADOOP-15993:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17838 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17838/])
HADOOP-15993. Upgrade Kafka to 2.4.0 in hadoop-kafka module. (#1796) (tasanuma: 
rev a40dc9ee315222713ef6fce5c14a91a2fcd7a245)
* (edit) hadoop-project/pom.xml
* (edit) LICENSE-binary
* (add) licenses-binary/LICENSE-zstd-jni.txt


> Upgrade Kafka version in hadoop-kafka module
> 
>
> Key: HADOOP-15993
> URL: https://issues.apache.org/jira/browse/HADOOP-15993
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Now the version is 0.8.2.1 and it has a net.jpountz.lz4:lz4:1.2.0 dependency, 
> which is vulnerable. 
> (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-4611)
> Let's upgrade.
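The upgrade itself is a version bump in hadoop-project/pom.xml, roughly like the fragment below. The `<kafka.version>` property name is an assumption for illustration; only the 2.4.0 value and the touched file come from this issue's commit.

```xml
<!-- hadoop-project/pom.xml (illustrative sketch; property name assumed) -->
<properties>
  <!-- was 0.8.2.1, which pulled in the vulnerable net.jpountz.lz4:lz4:1.2.0 -->
  <kafka.version>2.4.0</kafka.version>
</properties>
```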



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15993) Upgrade Kafka version in hadoop-kafka module

2020-01-08 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15993:
--
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged to trunk. Thanks for the PR, [~aajisaka]!







[GitHub] [hadoop] tasanuma merged pull request #1796: HADOOP-15993. Upgrade Kafka to 2.4.0 in hadoop-kafka module.

2020-01-08 Thread GitBox
tasanuma merged pull request #1796: HADOOP-15993. Upgrade Kafka to 2.4.0 in 
hadoop-kafka module.
URL: https://github.com/apache/hadoop/pull/1796
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (HADOOP-16785) Improve wasb and abfs resilience on double close() calls

2020-01-08 Thread Thomas Marqardt (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011497#comment-17011497
 ] 

Thomas Marqardt commented on HADOOP-16785:
--

I’m not set up to build/test at the moment, so could you use two streams writing 
to the same file and verify whether there is actually another issue?  I 
hope FilterOutputStream.close calls AbfsOutputStream.close and there is no 
issue. 

> Improve wasb and abfs resilience on double close() calls
> 
>
> Key: HADOOP-16785
> URL: https://issues.apache.org/jira/browse/HADOOP-16785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.2
>
>
> # if you call write() after the NativeAzureFsOutputStream is closed, it throws 
> an NPE, which isn't always caught by closeQuietly code. It needs to raise 
> an IOE instead
> # abfs close ops can trigger failures in try-with-resources use
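Both failure modes above can be avoided with a close-once guard. Below is a minimal, hypothetical sketch (plain java.io, not the actual wasb/abfs classes) of a wrapper whose close() is idempotent and whose write-after-close raises an IOException instead of an NPE:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Illustrative sketch only: an OutputStream wrapper that tolerates
 * double close() (safe in try-with-resources) and turns
 * write-after-close into an IOException rather than an NPE.
 */
public class IdempotentCloseStream extends OutputStream {
  private final OutputStream inner;
  private final AtomicBoolean closed = new AtomicBoolean(false);

  public IdempotentCloseStream(OutputStream inner) {
    this.inner = inner;
  }

  private void checkOpen() throws IOException {
    if (closed.get()) {
      throw new IOException("Stream is closed"); // an IOE, not an NPE
    }
  }

  @Override
  public void write(int b) throws IOException {
    checkOpen();
    inner.write(b);
  }

  @Override
  public void close() throws IOException {
    // only the first close() reaches the inner stream; later calls are no-ops
    if (closed.compareAndSet(false, true)) {
      inner.close();
    }
  }
}
```

With this shape, a second close() inside closeQuietly or a try-with-resources block is harmless, and callers that ignore the closed state get a catchable IOException.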






[jira] [Commented] (HADOOP-13144) Enhancing IPC client throughput via multiple connections per user

2020-01-08 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011481#comment-17011481
 ] 

Janus Chow commented on HADOOP-13144:
-

In the latest patch, I created a new package, "org.apache.hadoop.ipc", under the 
hadoop-hdfs-rbf project for FederationConnectionId; I don't know whether this is 
the Apache way or not.

> Enhancing IPC client throughput via multiple connections per user
> -
>
> Key: HADOOP-13144
> URL: https://issues.apache.org/jira/browse/HADOOP-13144
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Jason Kace
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HADOOP-13144-branch-2.9.001.patch, 
> HADOOP-13144-branch-2.9.002.patch, HADOOP-13144-branch-2.9.003.patch, 
> HADOOP-13144-branch-2.9.004.patch, HADOOP-13144-performance.patch, 
> HADOOP-13144.000.patch, HADOOP-13144.001.patch, HADOOP-13144.002.patch, 
> HADOOP-13144.003.patch
>
>
> The generic IPC client ({{org.apache.hadoop.ipc.Client}}) utilizes a single 
> connection thread for each {{ConnectionId}}.  The {{ConnectionId}} is unique 
> to the connection's remote address, ticket and protocol.  Each ConnectionId 
> is 1:1 mapped to a connection thread by the client via a map cache.
> The result is to serialize all IPC read/write activity through a single 
> thread for each user/ticket + address.  If a single user makes repeated 
> calls (1k-100k/sec) to the same destination, the IPC client becomes a 
> bottleneck.
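A sketch of the proposed direction, under the assumption that the fix fans a hot ConnectionId out over N pooled connections. Names and types here are illustrative, not Hadoop's actual ipc.Client internals:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical sketch: instead of a strict 1:1 map from ConnectionId to
 * connection thread, key the cache by (id, slot) and rotate over N slots,
 * so repeated calls by one user/ticket+address are no longer serialized
 * through a single connection.
 */
public class ConnectionPool {
  private final int connsPerId;
  private final AtomicLong counter = new AtomicLong();
  private final ConcurrentHashMap<String, Object> connections =
      new ConcurrentHashMap<>();

  public ConnectionPool(int connsPerId) {
    this.connsPerId = connsPerId;
  }

  /** Pick one of N slots round-robin; real code would open a socket here. */
  public Object getConnection(String connectionId) {
    long slot = counter.getAndIncrement() % connsPerId;
    String key = connectionId + "#" + slot;
    return connections.computeIfAbsent(key, k -> new Object());
  }
}
```

With connsPerId = 1 this degenerates to today's behavior, which is one way such a feature could default to backward compatibility.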






[GitHub] [hadoop] jojochuang commented on a change in pull request #1758: HDFS-15052. WebHDFS getTrashRoot leads to OOM due to FileSystem objec…

2020-01-08 Thread GitBox
jojochuang commented on a change in pull request #1758: HDFS-15052. WebHDFS 
getTrashRoot leads to OOM due to FileSystem objec…
URL: https://github.com/apache/hadoop/pull/1758#discussion_r364554258
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
 ##
 @@ -1345,11 +1348,21 @@ protected Response get(
 }
   }
 
-  private static String getTrashRoot(String fullPath,
-  Configuration conf) throws IOException {
-FileSystem fs = FileSystem.get(conf != null ? conf : new Configuration());
-return fs.getTrashRoot(
-new org.apache.hadoop.fs.Path(fullPath)).toUri().getPath();
+  private String getTrashRoot(String fullPath) throws IOException {
+String user = UserGroupInformation.getCurrentUser().getShortUserName();
+org.apache.hadoop.fs.Path path = new org.apache.hadoop.fs.Path(fullPath);
+String parentSrc = path.isRoot() ?
+path.toUri().getPath() : path.getParent().toUri().getPath();
+EncryptionZone ez = getRpcClientProtocol().getEZForPath(parentSrc);
+org.apache.hadoop.fs.Path trashRoot;
+if (ez != null) {
+  trashRoot = new org.apache.hadoop.fs.Path(
+  new org.apache.hadoop.fs.Path(ez.getPath(), TRASH_PREFIX), user);
+} else {
+  trashRoot = new org.apache.hadoop.fs.Path(
+  new org.apache.hadoop.fs.Path(USER_HOME_PREFIX, user), TRASH_PREFIX);
+}
+return trashRoot.toUri().getPath();
 
 Review comment:
   This change assumes the namenode's default fs is a DistributedFileSystem, which 
makes sense.
   The change basically copies the implementation inside 
DistributedFileSystem#getTrashRoot(). If that method evolves in the future, the 
same changes should be applied here too. 
   
   Can we add a comment inside DistributedFileSystem#getTrashRoot() noting that 
any change should also be made in NamenodeWebHdfsMethods#getTrashRoot()?





[jira] [Commented] (HADOOP-16792) Let s3 clients configure request timeout

2020-01-08 Thread Aaron Fabbri (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011401#comment-17011401
 ] 

Aaron Fabbri commented on HADOOP-16792:
---

Hi [~mustafaiman]. All the site docs live here: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/

You can see these published at apache.org 
[here|https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html].
 Also note the defaults and descriptions in 
hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

> Let s3 clients configure request timeout
> 
>
> Key: HADOOP-16792
> URL: https://issues.apache.org/jira/browse/HADOOP-16792
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>
> S3 does not guarantee latency. Every once in a while a request may straggle 
> and drive latency up for the larger operation. In these cases, simply 
> timing-out the individual request is beneficial so that the client 
> application can retry. The retry tends to complete faster than the original 
> straggling request most of the time. Others experienced this issue too: 
> [https://arxiv.org/pdf/1911.11727.pdf] .
> S3 configuration already provides timeout facility via 
> `ClientConfiguration#setTimeout`. Exposing this configuration is beneficial 
> for latency sensitive applications. S3 client configuration is shared with 
> the DynamoDB client, which is also affected by unreliable worst-case latency.
>  
>  
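A minimal sketch of what "exposing the configuration" could look like: read a timeout key with a safe default and hand the value to the SDK. The key name fs.s3a.request.timeout is an assumption for illustration, and java.util.Properties stands in for Hadoop's Configuration:

```java
import java.util.Properties;

/**
 * Illustrative sketch only. In hadoop-aws the resulting value would be
 * passed to the AWS SDK's ClientConfiguration#setRequestTimeout(int);
 * the key name below is assumed, not the real s3a property.
 */
public class RequestTimeoutConfig {
  static final String KEY = "fs.s3a.request.timeout";
  // 0 keeps the SDK default, i.e. no per-request timeout
  static final int DEFAULT_MS = 0;

  static int requestTimeoutMs(Properties conf) {
    String v = conf.getProperty(KEY);
    int ms = (v == null) ? DEFAULT_MS : Integer.parseInt(v.trim());
    if (ms < 0) {
      throw new IllegalArgumentException(KEY + " must be >= 0: " + ms);
    }
    return ms;
  }
}
```

Defaulting to the SDK's existing value is what keeps the change purely opt-in, as the reporter argues below.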






[jira] [Commented] (HADOOP-16793) Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011323#comment-17011323
 ] 

Hadoop QA commented on HADOOP-16793:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HADOOP-16793 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990157/HADOOP-16793.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6f8a21ceceb5 4.15.0-70-generic #79-Ubuntu SMP Tue Nov 12 
10:36:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b1e07d2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16732/testReport/ |
| Max. process+thread count | 1614 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16732/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Remove WARN log

[jira] [Updated] (HADOOP-16793) Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-08 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16793:
-
Status: Patch Available  (was: Open)

> Remove WARN log when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> ---
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. This is not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  
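The proposed behavior can be sketched as a log-level decision: stay quiet on a deliberate interrupt, warn otherwise. This is a hypothetical illustration of the idea, not the actual patch:

```java
import java.io.IOException;
import java.io.InterruptedIOException;

/**
 * Illustrative sketch: when the connection failure is an
 * InterruptedIOException (e.g. RequestHedgingProxyProvider cancelling
 * the losing requests), log at DEBUG instead of WARN. Strings stand in
 * for the real SLF4J logger calls.
 */
public class SaslFailureLogging {
  static String logLevelFor(IOException ex) {
    if (ex instanceof InterruptedIOException) {
      return "DEBUG"; // expected cancellation, not worth a warning
    }
    return "WARN";    // genuine connection failure
  }
}
```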






[jira] [Commented] (HADOOP-16590) IBM Java has deprecated OS login module classes and OS principal classes.

2020-01-08 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011064#comment-17011064
 ] 

Eric Yang commented on HADOOP-16590:


Targeting this for the 3.3.0 release.

> IBM Java has deprecated OS login module classes and OS principal classes.
> -
>
> Key: HADOOP-16590
> URL: https://issues.apache.org/jira/browse/HADOOP-16590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Nicholas Marion
>Priority: Major
>
> When building applications that rely on hadoop-common and using IBM Java, 
> errors such as `{{Exception in thread "main" java.io.IOException: failure to 
> login}}` and `{{Unable to find JAAS 
> classes:com.ibm.security.auth.LinuxPrincipal}}` can be seen.
> IBM Java has deprecated the following OS Login Module classes:
> {code:java}
> com.ibm.security.auth.module.Win64LoginModule
> com.ibm.security.auth.module.NTLoginModule
> com.ibm.security.auth.module.AIX64LoginModule
> com.ibm.security.auth.module.AIXLoginModule
> com.ibm.security.auth.module.LinuxLoginModule
> {code}
> and replaced with
> {code:java}
> com.ibm.security.auth.module.JAASLoginModule{code}
> IBM Java has deprecated the following OS Principal classes:
>  
> {code:java}
> com.ibm.security.auth.UsernamePrincipal
> com.ibm.security.auth.NTUserPrincipal
> com.ibm.security.auth.AIXPrincipal
> com.ibm.security.auth.LinuxPrincipal
> {code}
> and replaced with
> {code:java}
> com.ibm.security.auth.UsernamePrincipal{code}
> The older issue HADOOP-15765 reports the same problem.
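A sketch of the shape such a fix could take: pick the JAAS login module class from the JVM vendor, preferring IBM's non-deprecated replacement. The class names come from this issue; the selection logic itself is an illustrative assumption, not the actual patch:

```java
/**
 * Illustrative sketch: choose the OS login module class name by JVM
 * vendor. On IBM Java, the single JAASLoginModule replaces the
 * deprecated per-OS modules (NTLoginModule, AIXLoginModule, ...).
 */
public class OsLoginModuleSelector {
  static String loginModuleFor(String javaVendor) {
    if (javaVendor != null && javaVendor.contains("IBM")) {
      return "com.ibm.security.auth.module.JAASLoginModule";
    }
    // non-IBM JVMs keep the usual platform module (Unix shown here)
    return "com.sun.security.auth.module.UnixLoginModule";
  }
}
```

In real code the vendor string would come from System.getProperty("java.vendor"), which is how UserGroupInformation-style vendor checks are typically done.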






[jira] [Commented] (HADOOP-16590) IBM Java has deprecated OS login module classes and OS principal classes.

2020-01-08 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011062#comment-17011062
 ] 

Eric Yang commented on HADOOP-16590:


[~nmarion] Thank you for the patch.  +1 on the patch.  I think this change is a 
good solution for the IBM JDK.  I doubt anyone is running a 32-bit IBM JDK with 
Hadoop.  The shaded-plugin failure seems to be caused by running this command 
in the pre-commit test:

{code}mvn --batch-mode verify -fae --batch-mode -am -pl 
hadoop-client-modules/hadoop-client-check-invariants -pl 
hadoop-client-modules/hadoop-client-check-test-invariants -pl 
hadoop-client-modules/hadoop-client-integration-tests -Dtest=NoUnitTests 
-Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true{code}

The failure does not appear to be related to this patch.

{code}
[INFO] --- exec-maven-plugin:1.3.1:exec (check-jar-contents) @ 
hadoop-client-check-test-invariants ---
[ERROR] Found artifact with unexpected contents: 
'/home/eyang/test/hadoop/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.3.0-SNAPSHOT.jar'
Please check the following and either correct the build or update
the allowed list with reasoning.

hdfs-default.xml.orig
{code}

I will commit this patch if there are no objections.







[jira] [Updated] (HADOOP-16590) IBM Java has deprecated OS login module classes and OS principal classes.

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16590:
---
Target Version/s: 3.3.0







[jira] [Commented] (HADOOP-16792) Let s3 clients configure request timeout

2020-01-08 Thread Mustafa Iman (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010957#comment-17010957
 ] 

Mustafa Iman commented on HADOOP-16792:
---

[~ste...@apache.org]

I could not figure out how the other s3 integration tests were run. I wanted to 
see whether the precommit tests would magically run them. Thanks for pointing to 
testings3a.md; I had not discovered it before sending the patch. I'll do the 
manual testing shortly.

I don't think ITestS3AHugeFilesDiskBlocks is in the scope of this patch, as I am 
not changing any default values. This patch just exposes an SDK property to 
Hadoop clients, and the default is still the same old value. I understand the 
concern about copying large files when this config is set too small. If I 
understand your concern correctly, it is an operational problem: the possibility 
of someone setting this config to an inappropriate value without understanding 
the effects should not prevent us from making it configurable.

I'd appreciate it if you could point me to the appropriate documentation to 
update.







[GitHub] [hadoop] steveloughran commented on a change in pull request #1761: HADOOP-16759. Filesystem openFile() builder to take a FileStatus param

2020-01-08 Thread GitBox
steveloughran commented on a change in pull request #1761: HADOOP-16759. 
Filesystem openFile() builder to take a FileStatus param
URL: https://github.com/apache/hadoop/pull/1761#discussion_r364365595
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/OpenFileParameters.java
 ##
 @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+
+/**
+ * All the parameters from the openFile builder for the {@code 
openFileWithOptions} commands.
+ *
+ * If/when new attributes added to the builder, this class will be extended.
+ */
+public class OpenFileParameters {
 
 Review comment:
   gone halfway with this; renamed set* to with* and returned the same object 
for ease of chaining





[GitHub] [hadoop] steveloughran commented on a change in pull request #1761: HADOOP-16759. Filesystem openFile() builder to take a FileStatus param

2020-01-08 Thread GitBox
steveloughran commented on a change in pull request #1761: HADOOP-16759. 
Filesystem openFile() builder to take a FileStatus param
URL: https://github.com/apache/hadoop/pull/1761#discussion_r364361215
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ARemoteFileChanged.java
 ##
 @@ -432,6 +433,42 @@ public void testReadFileChangedOutOfSyncMetadata() throws 
Throwable {
 }
   }
 
+  /**
+   * Verifies that when the openFile builder is passed in a status,
+   * then that is used to eliminate the getFileStatus call in open();
+   * thus the version and etag passed down are still used.
+   */
+  @Test
+  public void testOpenFileWithStatus() throws Throwable {
+final Path testpath = path("testOpenFileWithStatus.dat");
+final byte[] dataset = TEST_DATA_BYTES;
+S3AFileStatus originalStatus =
+writeFile(testpath, dataset, dataset.length, true);
+
+// forge a file status with a different tag
+S3AFileStatus forgedStatus =
+S3AFileStatus.fromFileStatus(originalStatus, Tristate.FALSE,
+originalStatus.getETag() + "-fake",
+originalStatus.getVersionId() + "");
+fs.getMetadataStore().put(
+new PathMetadata(forgedStatus, Tristate.FALSE, false));
+
+// By passing in the status open() doesn't need to check s3guard
+// And hence the existing file is opened
+try (FSDataInputStream instream = fs.openFile(testpath)
+.withFileStatus(originalStatus)
+.build().get()) {
+   instream.read();
+}
+
+// and this holds for S3A Located Status
+try (FSDataInputStream instream = fs.openFile(testpath)
+.withFileStatus(new S3ALocatedFileStatus(originalStatus, null))
+.build().get()) {
+  instream.read();
+}
+  }
 
 Review comment:
   done. Also added a contract test verifying that the status must be non-null.





[GitHub] [hadoop] steveloughran commented on a change in pull request #1761: HADOOP-16759. Filesystem openFile() builder to take a FileStatus param

2020-01-08 Thread GitBox
steveloughran commented on a change in pull request #1761: HADOOP-16759. 
Filesystem openFile() builder to take a FileStatus param
URL: https://github.com/apache/hadoop/pull/1761#discussion_r364356490
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -4366,15 +4378,38 @@ private void requireSelectSupport(final Path source) 
throws
   InternalConstants.STANDARD_OPENFILE_KEYS,
   "for " + path + " in non-select file I/O");
 }
+FileStatus status = parameters.getStatus();
 
 Review comment:
   will do





[jira] [Updated] (HADOOP-16794) S3 Encryption is always using default region-specific AWS-managed KMS key

2020-01-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16794:

Affects Version/s: 3.2.1

> S3 Encryption is always using default region-specific AWS-managed KMS key
> -
>
> Key: HADOOP-16794
> URL: https://issues.apache.org/jira/browse/HADOOP-16794
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Mukund Thakur
>Priority: Major
>
> When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all 
> files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the 
> wrong encryption key, always falling back to the region-specific AWS-managed 
> KMS key for S3, instead of retaining the custom CMK.






[jira] [Updated] (HADOOP-16794) S3 Encryption is always using default region-specific AWS-managed KMS key

2020-01-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16794:

Component/s: fs/s3

> S3 Encryption is always using default region-specific AWS-managed KMS key
> -
>
> Key: HADOOP-16794
> URL: https://issues.apache.org/jira/browse/HADOOP-16794
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mukund Thakur
>Priority: Major
>
> When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all 
> files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the 
> wrong encryption key, always falling back to the region-specific AWS-managed 
> KMS key for S3, instead of retaining the custom CMK.






[jira] [Resolved] (HADOOP-6377) ChecksumFileSystem.getContentSummary throws NPE when directory contains inaccessible directories

2020-01-08 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-6377.
--
Resolution: Duplicate

> ChecksumFileSystem.getContentSummary throws NPE when directory contains 
> inaccessible directories
> 
>
> Key: HADOOP-6377
> URL: https://issues.apache.org/jira/browse/HADOOP-6377
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.21.0, 0.22.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
>Priority: Major
>
> When getContentSummary is called on a path that contains an unreadable 
> directory, it throws NPE, since RawLocalFileSystem.listStatus(Path) returns 
> null when File.list() returns null.






[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1761: HADOOP-16759. Filesystem openFile() builder to take a FileStatus param

2020-01-08 Thread GitBox
hadoop-yetus removed a comment on issue #1761: HADOOP-16759. Filesystem 
openFile() builder to take a FileStatus param
URL: https://github.com/apache/hadoop/pull/1761#issuecomment-565468853
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 27s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 41s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 45s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 18s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  1s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 16s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 12s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 14s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 29s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 46s |  root: The patch generated 9 new 
+ 289 unchanged - 2 fixed = 298 total (was 291)  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 3 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  12m 57s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   8m 51s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   1m 36s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 122m 16s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.TestHarFileSystem |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1761 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 9e65e4d29757 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 65c4660 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/1/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/1/testReport/ |
   | Max. process+thread count | 1475 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-16785) Improve wasb and abfs resilience on double close() calls

2020-01-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010751#comment-17010751
 ] 

Steve Loughran commented on HADOOP-16785:
-

Looking at this, and wondering if there is more to do.

FilterOutputStream.close() does
{code}
public void close() throws IOException {
  try (OutputStream ostream = out) {
    flush();
  }
}
{code}

.. which we fix.

But what if the codepath is something like
{code}
try (OutputStream ostream =
    new FilterOutputStream(abfs.createFile("mydata.csv"))) {
  ostream.write();
}
{code}
And it's the write() which raises an exception.

In this situation, the FilterOutputStream.close() is the one which raises the
error, and as it rethrows the same exception as the write(), try-with-resources
will hit the self-suppression problem.

In which case, maybeThrowLastError() MUST always wrap the nested exception.
This guarantees no addSuppressed errors anywhere.

Thoughts?
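The hazard being discussed can be reproduced outside abfs. A minimal, self-contained sketch (a toy stream class, not the real Azure code) showing that rethrowing the *same* exception instance from close() trips the try-with-resources self-suppression check, while wrapping it in a new exception does not:

```java
import java.io.IOException;
import java.io.OutputStream;

public class SelfSuppressionDemo {
    // Toy stream: remembers the first failure and rethrows it from close(),
    // either as the same instance or wrapped in a fresh IOException.
    static class FaultyStream extends OutputStream {
        private IOException lastError;
        private final boolean wrapOnClose;

        FaultyStream(boolean wrapOnClose) {
            this.wrapOnClose = wrapOnClose;
        }

        @Override
        public void write(int b) throws IOException {
            lastError = new IOException("disk gone");
            throw lastError;
        }

        @Override
        public void close() throws IOException {
            if (lastError != null) {
                // Wrapping gives try-with-resources a distinct instance
                // to add as a suppressed exception.
                throw wrapOnClose ? new IOException(lastError) : lastError;
            }
        }
    }

    static String run(boolean wrapOnClose) {
        try (FaultyStream s = new FaultyStream(wrapOnClose)) {
            s.write(1);
        } catch (IOException e) {
            return "IOException";
        } catch (IllegalArgumentException e) {
            // Throwable.addSuppressed(self) -> "Self-suppression not permitted"
            return "self-suppression";
        }
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(run(false)); // same instance rethrown from close()
        System.out.println(run(true));  // wrapped before rethrow
    }
}
```

In the unwrapped case the try-with-resources desugaring calls addSuppressed() on the primary exception with itself, and the resulting IllegalArgumentException replaces the original IOException, which is why always wrapping in maybeThrowLastError() matters.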

> Improve wasb and abfs resilience on double close() calls
> 
>
> Key: HADOOP-16785
> URL: https://issues.apache.org/jira/browse/HADOOP-16785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.2
>
>
> # if you call write() after the NativeAzureFsOutputStream is closed it throws 
> an NPE ... which isn't always caught by closeQuietly code. It needs to raise 
> an IOE
> # abfs close ops can trigger failures in try-with-resources use






[jira] [Commented] (HADOOP-16642) ITestDynamoDBMetadataStoreScale fails when throttled.

2020-01-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010725#comment-17010725
 ] 

Hudson commented on HADOOP-16642:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17831 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17831/])
HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled. (stevel: 
rev 52cc20e9ea03e5b040d3fb452131dc01aee52074)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java


> ITestDynamoDBMetadataStoreScale fails when throttled.
> -
>
> Key: HADOOP-16642
> URL: https://issues.apache.org/jira/browse/HADOOP-16642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> ITestDynamoDBMetadataStoreScale tries to create a scale test iff the table 
> isn't PAYG. It's failing because the wrong text is being returned.
> Proposed: don't look for any text.
> {code} 
> 13:06:22 java.lang.AssertionError: 
> 13:06:22 Expected throttling message:  Expected to find ' This may be because 
> the write threshold of DynamoDB is set too low.' 
> but got unexpected exception: 
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: 
> Put tombstone on s3a://fake-bucket/moved-here: 
> com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException:
>  
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
> ProvisionedThroughputExceededException; 
> Request ID: L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG): 
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; 
> Error Code: ProvisionedThroughputExceededException; Request ID: 
> L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG)
> 13:06:22  at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:402)
> 13
> {code}






[jira] [Updated] (HADOOP-16642) ITestDynamoDBMetadataStoreScale fails when throttled.

2020-01-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16642:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> ITestDynamoDBMetadataStoreScale fails when throttled.
> -
>
> Key: HADOOP-16642
> URL: https://issues.apache.org/jira/browse/HADOOP-16642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> ITestDynamoDBMetadataStoreScale tries to create a scale test iff the table 
> isn't PAYG. It's failing because the wrong text is being returned.
> Proposed: don't look for any text.
> {code} 
> 13:06:22 java.lang.AssertionError: 
> 13:06:22 Expected throttling message:  Expected to find ' This may be because 
> the write threshold of DynamoDB is set too low.' 
> but got unexpected exception: 
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: 
> Put tombstone on s3a://fake-bucket/moved-here: 
> com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException:
>  
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
> ProvisionedThroughputExceededException; 
> Request ID: L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG): 
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; 
> Error Code: ProvisionedThroughputExceededException; Request ID: 
> L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG)
> 13:06:22  at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:402)
> 13
> {code}






[jira] [Commented] (HADOOP-16621) [pb-upgrade] spark-hive doesn't compile against hadoop trunk because of Token's marshalling

2020-01-08 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010708#comment-17010708
 ] 

Ayush Saxena commented on HADOOP-16621:
---

Thanx [~vinayakumarb] for the details. Unless the compat guidelines block us 
from going ahead, option #1 seems a better approach to me.
Anyway, going by the wording of the compat guidelines, they allow changes in 
minor releases to interfaces marked Evolving. Technically, this shouldn't 
violate the compatibility guidelines.

> [pb-upgrade] spark-hive doesn't compile against hadoop trunk because of 
> Token's marshalling
> ---
>
> Key: HADOOP-16621
> URL: https://issues.apache.org/jira/browse/HADOOP-16621
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Critical
>
> the move to protobuf 3.x stops Spark building, because Token has a method 
> which returns a protobuf, and now it's returning some v3 types.
> if we want to isolate downstream code from protobuf changes, we need to move 
> that marshalling method out of Token and put it in a helper class.
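The helper-class isolation proposed above can be sketched generically (toy types, not Hadoop's real Token or its protobuf classes): the public Token type exposes no wire-format objects, and only the helper knows the serialization library, so upgrading that library never changes Token's public signature.

```java
import java.nio.charset.StandardCharsets;

// Toy sketch of the isolation pattern: Token's public API carries only
// domain fields; the marshaller is the one place that knows the wire format.
public class TokenMarshalDemo {
    static final class Token {
        final String kind;
        final String service;
        Token(String kind, String service) {
            this.kind = kind;
            this.service = service;
        }
    }

    // Swapping the serialization library (here, a trivial text encoding)
    // only touches this helper, never the Token class itself.
    static final class TokenMarshaller {
        static byte[] marshal(Token t) {
            return (t.kind + "\n" + t.service).getBytes(StandardCharsets.UTF_8);
        }
        static Token unmarshal(byte[] bytes) {
            String[] parts = new String(bytes, StandardCharsets.UTF_8).split("\n", 2);
            return new Token(parts[0], parts[1]);
        }
    }

    public static void main(String[] args) {
        Token t = TokenMarshaller.unmarshal(
                TokenMarshaller.marshal(new Token("HDFS_DELEGATION_TOKEN", "nn1")));
        System.out.println(t.kind + " " + t.service);
    }
}
```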






[GitHub] [hadoop] steveloughran closed pull request #1620: HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled.

2020-01-08 Thread GitBox
steveloughran closed pull request #1620: HADOOP-16642. 
ITestDynamoDBMetadataStoreScale fails when throttled.
URL: https://github.com/apache/hadoop/pull/1620
 
 
   





[GitHub] [hadoop] steveloughran commented on issue #1620: HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled.

2020-01-08 Thread GitBox
steveloughran commented on issue #1620: HADOOP-16642. 
ITestDynamoDBMetadataStoreScale fails when throttled.
URL: https://github.com/apache/hadoop/pull/1620#issuecomment-572078616
 
 
   thx





[jira] [Commented] (HADOOP-16796) Add AWS S3 Transfer acceleration support

2020-01-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010706#comment-17010706
 ] 

Steve Loughran commented on HADOOP-16796:
-

should just be a matter of changing fs.s3a.endpoint. Why not try it? A full 
hadoop-aws test run would be interesting.

we could consider extending the bucket-info command to report the status
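An untested sketch of the suggestion, assuming the accelerate endpoint works as a drop-in fs.s3a.endpoint value (the hostname is AWS's documented transfer-acceleration endpoint; whether S3A's request signing and addressing are compatible with it is exactly what the proposed test run would establish):

```xml
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3-accelerate.amazonaws.com</value>
</property>
```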

> Add AWS S3 Transfer acceleration support
> 
>
> Key: HADOOP-16796
> URL: https://issues.apache.org/jira/browse/HADOOP-16796
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.5
>Reporter: Pat H
>Priority: Minor
>
> It would be great to be able to use [S3 Transfer 
> Acceleration|https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html], 
> especially when reading data from multiple locations around the globe, as 
> there is a significant performance improvement by routing traffic over the AWS backbone.






[jira] [Updated] (HADOOP-16796) Add AWS S3 Transfer acceleration support

2020-01-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16796:

Parent: HADOOP-15620
Issue Type: Sub-task  (was: Improvement)

> Add AWS S3 Transfer acceleration support
> 
>
> Key: HADOOP-16796
> URL: https://issues.apache.org/jira/browse/HADOOP-16796
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.5
>Reporter: Pat H
>Priority: Minor
>
> It would be great to be able to use [S3 Transfer 
> Acceleration|https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html], 
> especially when reading data from multiple locations around the globe, as 
> there is a significant performance improvement by routing traffic over the AWS backbone.






[jira] [Updated] (HADOOP-16796) Add AWS S3 Transfer acceleration support

2020-01-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16796:

Component/s: (was: hadoop-aws)
 fs/s3

> Add AWS S3 Transfer acceleration support
> 
>
> Key: HADOOP-16796
> URL: https://issues.apache.org/jira/browse/HADOOP-16796
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.5
>Reporter: Pat H
>Priority: Minor
>
> It would be great to be able to use [S3 Transfer 
> Acceleration|https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html], 
> especially when reading data from multiple locations around the globe, as 
> there is a significant performance improvement by routing traffic over the AWS backbone.






[GitHub] [hadoop] XuQianJin-Stars commented on a change in pull request #1787: [HADOOP-16783] Exports Hadoop metrics to Prometheus PushGateWay

2020-01-08 Thread GitBox
XuQianJin-Stars commented on a change in pull request #1787: [HADOOP-16783] 
Exports Hadoop metrics to Prometheus PushGateWay
URL: https://github.com/apache/hadoop/pull/1787#discussion_r364240699
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/prometheus/PushGatewaySink.java
 ##
 @@ -0,0 +1,191 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements.  See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership.  The ASF licenses this file to you under the Apache 
License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with the 
License.  You may obtain
+ * a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.hadoop.metrics2.sink.prometheus;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.regex.Pattern;
+
+import io.prometheus.client.CollectorRegistry;
+import io.prometheus.client.Counter;
+import io.prometheus.client.Gauge;
+import io.prometheus.client.exporter.PushGateway;
+
+import org.apache.commons.configuration2.SubsetConfiguration;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.metrics2.AbstractMetric;
+import org.apache.hadoop.metrics2.MetricsException;
+import org.apache.hadoop.metrics2.MetricsRecord;
+import org.apache.hadoop.metrics2.MetricsSink;
+import org.apache.hadoop.metrics2.MetricsTag;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.metrics2.MetricType.COUNTER;
+import static org.apache.hadoop.metrics2.MetricType.GAUGE;
+
+/**
+ * A metrics sink that writes to a Prometheus PushGateWay.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public class PushGatewaySink implements MetricsSink, Closeable {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(PushGatewaySink.class);
+
+  private static final String JOB_NAME = "job";
+  private static final String HOST_KEY = "host";
+  private static final String PORT_KEY = "port";
+  private static final String GROUP_KEY = "groupingKey";
+
+  private static final Pattern SPLIT_PATTERN =
+  Pattern.compile("(?<!(^|[A-Z]))(?=[A-Z])|(?<!^)(?=[A-Z][a-z])");
+
+  private Map<String, String> groupingKey;
+  private PushGateway pg = null;
+  private String jobName;
+
+  @Override
+  public void init(SubsetConfiguration conf) {
+// Get PushGateWay host configurations.
+jobName = conf.getString(JOB_NAME, "hadoop-job");
+final String serverHost = conf.getString(HOST_KEY);
+final int serverPort = Integer.parseInt(conf.getString(PORT_KEY));
+
+if (serverHost == null || serverHost.isEmpty() || serverPort < 1) {
+  throw new MetricsException(
+  "Invalid host/port configuration. Host: " + serverHost + " Port: " + 
serverPort);
+}
+
+groupingKey = parseGroupingKey(conf.getString(GROUP_KEY, ""));
+pg = new PushGateway(serverHost + ':' + serverPort);
+  }
+
+  @Override
+  public void putMetrics(MetricsRecord metricsRecord) {
+try {
+  CollectorRegistry registry = new CollectorRegistry();
+  for (AbstractMetric metrics : metricsRecord.metrics()) {
+if (metrics.type() == COUNTER
+|| metrics.type() == GAUGE) {
+
+  String key = getMetricsName(
+  metricsRecord.name(), metrics.name()).replace(" ", "");
+
+  int tagSize = metricsRecord.tags().size();
+  String[] labelNames = new String[tagSize];
+  String[] labelValues = new String[tagSize];
+  int index = 0;
+  for (MetricsTag tag : metricsRecord.tags()) {
+String tagName = tag.name().toLowerCase();
+
+//ignore specific tag which includes sub-hierarchy
+if (NUM_OPEN_CONNECTION_SPERUSER.equals(tagName)) {
+  continue;
+}
+labelNames[index] = tagName;
+labelValues[index] =
+tag.value() == null ? NULL : tag.value();
+index++;
+  }
+
+  switch (metrics.type()) {
+  case GAUGE:
+Gauge.build(key, key)
+.labelNames(labelNames)
+.register(registry)
+.labels(labelValues)
+.set(metrics.value().doubleValue());
+break;
+  case COUNTER:
+Counter.build(key, key)
+.labelNames(labelNames

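The sink above would be wired up through the usual metrics2 properties file; a hypothetical hadoop-metrics2.properties fragment (host/port values are placeholders, and the property names follow the JOB_NAME/HOST_KEY/PORT_KEY constants read in init(); the groupingKey format depends on parseGroupingKey(), which is not shown here):

```properties
*.sink.prometheus.class=org.apache.hadoop.metrics2.sink.prometheus.PushGatewaySink
namenode.sink.prometheus.host=pushgateway.example.com
namenode.sink.prometheus.port=9091
namenode.sink.prometheus.job=hadoop-namenode
```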
[jira] [Created] (HADOOP-16796) Add AWS S3 Transfer acceleration support

2020-01-08 Thread Pat H (Jira)
Pat H created HADOOP-16796:
--

 Summary: Add AWS S3 Transfer acceleration support
 Key: HADOOP-16796
 URL: https://issues.apache.org/jira/browse/HADOOP-16796
 Project: Hadoop Common
  Issue Type: Improvement
  Components: hadoop-aws
Affects Versions: 2.8.5
Reporter: Pat H


It would be great to be able to use [S3 Transfer 
Acceleration|https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html], 
especially when reading data from multiple locations around the globe, as there 
is a significant performance improvement by routing traffic over the AWS backbone.






[jira] [Commented] (HADOOP-16783) Exports Hadoop metrics to Prometheus PushGateWay

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010668#comment-17010668
 ] 

Hadoop QA commented on HADOOP-16783:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m  
5s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} branch/hadoop-project no findbugs output file 
(findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 38s{color} | {color:orange} root: The patch generated 17 new + 50 unchanged 
- 0 fixed = 67 total (was 50) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} hadoop-project has no data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
9s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1787/5/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1787 |
| JIRA Issue | HADOOP-16783 |
| 

[GitHub] [hadoop] hadoop-yetus commented on issue #1787: [HADOOP-16783] Exports Hadoop metrics to Prometheus PushGateWay

2020-01-08 Thread GitBox
hadoop-yetus commented on issue #1787: [HADOOP-16783] Exports Hadoop metrics to 
Prometheus PushGateWay
URL: https://github.com/apache/hadoop/pull/1787#issuecomment-572044282
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  18m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 43s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  1s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 29s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   2m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 33s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 54s |  the patch passed  |
   | +1 :green_heart: |  javac  |  15m 54s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 38s |  root: The patch generated 17 new 
+ 50 unchanged - 0 fixed = 67 total (was 50)  |
   | +1 :green_heart: |  mvnsite  |   1m 58s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m  1s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  the patch passed  |
   | +0 :ok: |  findbugs  |   0m 34s |  hadoop-project has no data from 
findbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 33s |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   9m  9s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 114m 32s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1787/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1787 |
   | JIRA Issue | HADOOP-16783 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 02206d997f02 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7030722 |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1787/5/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1787/5/testReport/ |
   | Max. process+thread count | 1472 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-common U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1787/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14269) Create module-info.java for each module

2020-01-08 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal reassigned HADOOP-14269:
---

Assignee: (was: Adam Antal)

> Create module-info.java for each module
> ---
>
> Key: HADOOP-14269
> URL: https://issues.apache.org/jira/browse/HADOOP-14269
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> module-info.java is required for JDK9 Jigsaw feature.
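For context, a JDK 9 module descriptor is a small Java source file at the module root. The following is an illustrative sketch only: the module and package names are made up for the example, not taken from any patch.

```java
// module-info.java (illustrative sketch; module and package names are hypothetical)
module org.apache.hadoop.common {
    requires java.logging;          // modules this one depends on
    exports org.apache.hadoop.conf; // packages made visible to consumers
    exports org.apache.hadoop.fs;
}
```

Packages not listed in an `exports` clause become inaccessible to other modules, which is part of why retrofitting descriptors onto a large codebase is non-trivial.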



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16785) Improve wasb and abfs resilience on double close() calls

2020-01-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16785.
-
Fix Version/s: 3.2.2
   Resolution: Fixed

> Improve wasb and abfs resilience on double close() calls
> 
>
> Key: HADOOP-16785
> URL: https://issues.apache.org/jira/browse/HADOOP-16785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.2
>
>
> # If you call write() after the NativeAzureFsOutputStream is closed, it throws 
> an NPE, which isn't always caught by closeQuietly code; it needs to raise 
> an IOE instead.
> # abfs close ops can trigger failures in try-with-resources use
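The failure mode described above can be sketched with plain `java.io` types. This is an illustrative stand-in, not the actual Hadoop patch; the class name and wrapper design are hypothetical.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative sketch only (not the Hadoop patch): an output stream whose
// close() is idempotent and whose write() after close raises an IOException
// rather than an NPE, so closeQuietly-style helpers and double close() calls
// from try-with-resources behave predictably.
public class SafeOutputStream extends OutputStream {
    private final OutputStream inner;
    private boolean closed;

    public SafeOutputStream(OutputStream inner) {
        this.inner = inner;
    }

    private void checkOpen() throws IOException {
        if (closed) {
            // raise an IOE, never an NPE, on write-after-close
            throw new IOException("stream is closed");
        }
    }

    @Override
    public void write(int b) throws IOException {
        checkOpen();
        inner.write(b);
    }

    @Override
    public void close() throws IOException {
        if (closed) {
            return; // second and later close() calls are no-ops
        }
        closed = true;
        inner.close();
    }
}
```

The key design points are the explicit `closed` flag checked before every write and the early return in `close()`, which together satisfy the `java.io.Closeable` contract that closing an already-closed stream has no effect.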



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1620: HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled.

2020-01-08 Thread GitBox
bgaborg commented on issue #1620: HADOOP-16642. ITestDynamoDBMetadataStoreScale 
fails when throttled.
URL: https://github.com/apache/hadoop/pull/1620#issuecomment-572022149
 
 
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14269) Create module-info.java for each module

2020-01-08 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal reassigned HADOOP-14269:
---

Assignee: Adam Antal

> Create module-info.java for each module
> ---
>
> Key: HADOOP-14269
> URL: https://issues.apache.org/jira/browse/HADOOP-14269
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Adam Antal
>Priority: Major
>
> module-info.java is required for JDK9 Jigsaw feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16751) DurationInfo text parsing/formatting should be moved out of hotpath

2020-01-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010623#comment-17010623
 ] 

Hudson commented on HADOOP-16751:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17830 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17830/])
HADOOP-16751. Followup: move java import. (#1799) (github: rev 
bb1aed475b1dac1a9bf1069181016eb5b20ae55e)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DurationInfo.java


> DurationInfo text parsing/formatting should be moved out of hotpath
> ---
>
> Key: HADOOP-16751
> URL: https://issues.apache.org/jira/browse/HADOOP-16751
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
>  Labels: perfomance
> Fix For: 3.3.0
>
> Attachments: Screenshot 2019-12-09 at 10.32.33 AM.png, 
> image-2019-12-09-10-45-17-351.png
>
>
> {color:#172b4d}It would be good to lazy evaluate the text on need 
> basis.{color}
> {color:#172b4d}[https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DurationInfo.java#L68]{color}
> {color:#172b4d}All pink color in the following diagram are from this 
> codepath.{color}
>  
> {color:#172b4d}!Screenshot 2019-12-09 at 10.32.33 
> AM.png|width=1008,height=920!{color}
>  
> {color:#172b4d}!image-2019-12-09-10-45-17-351.png|width=571,height=373!{color}
>  
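The lazy-evaluation idea proposed above can be illustrated with a stdlib-only sketch. The class and method names here are hypothetical stand-ins, not the real DurationInfo API.

```java
import java.util.function.Supplier;

// Hypothetical sketch of lazy text evaluation: the description string comes
// from a Supplier that is only invoked inside toString(), so no formatting
// cost is paid on the hot path unless something actually logs the duration.
public class LazyDuration implements AutoCloseable {
    private final Supplier<String> text; // not invoked in the constructor
    private final long startNanos = System.nanoTime();
    private long finishNanos;

    public LazyDuration(Supplier<String> text) {
        this.text = text;                // store the supplier, do not format
    }

    @Override
    public void close() {
        finishNanos = System.nanoTime(); // record the end time only
    }

    @Override
    public String toString() {
        // the (potentially expensive) text is built here, on demand
        return text.get() + ": " + (finishNanos - startNanos) / 1_000_000 + " ms";
    }
}
```

Callers that never print the object never pay for the string construction, which is exactly the property the profiling screenshots motivate.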



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16785) Improve wasb and abfs resilience on double close() calls

2020-01-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010624#comment-17010624
 ] 

Hudson commented on HADOOP-16785:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17830 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17830/])
HADOOP-16785. Improve wasb and abfs resilience on double close() calls. 
(stevel: rev 17aa8f6764262767b42717cf190a53e2c1795507)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AbstractWasbTestBase.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestFileSystemOperationExceptionHandling.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/LambdaTestUtils.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java


> Improve wasb and abfs resilience on double close() calls
> 
>
> Key: HADOOP-16785
> URL: https://issues.apache.org/jira/browse/HADOOP-16785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> # If you call write() after the NativeAzureFsOutputStream is closed, it throws 
> an NPE, which isn't always caught by closeQuietly code; it needs to raise 
> an IOE instead.
> # abfs close ops can trigger failures in try-with-resources use



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16772) Extract version numbers to head of pom.xml (addendum)

2020-01-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010616#comment-17010616
 ] 

Hudson commented on HADOOP-16772:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17829 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17829/])
HADOOP-16772. Extract version numbers to head of pom.xml (addendum) 
(gabor.bota: rev f1f3f23c3c74461ada986c561743d84f9dd4a507)
* (edit) hadoop-project/pom.xml


> Extract version numbers to head of pom.xml (addendum)
> -
>
> Key: HADOOP-16772
> URL: https://issues.apache.org/jira/browse/HADOOP-16772
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tamas Penzes
>Assignee: Tamas Penzes
>Priority: Major
>
> Forgot to extract a few version numbers; this is a follow-up ticket of 
> HADOOP-16729.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #1799: HADOOP-16751. Followup: move java import.

2020-01-08 Thread GitBox
steveloughran merged pull request #1799: HADOOP-16751. Followup: move java 
import.
URL: https://github.com/apache/hadoop/pull/1799
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16772) Extract version numbers to head of pom.xml (addendum)

2020-01-08 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010597#comment-17010597
 ] 

Gabor Bota commented on HADOOP-16772:
-

I committed #1773 as well. Thanks for your contribution [~tamaas]!

> Extract version numbers to head of pom.xml (addendum)
> -
>
> Key: HADOOP-16772
> URL: https://issues.apache.org/jira/browse/HADOOP-16772
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tamas Penzes
>Assignee: Tamas Penzes
>Priority: Major
>
> Forgot to extract a few version numbers; this is a follow-up ticket of 
> HADOOP-16729.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg merged pull request #1773: HADOOP-16772. Extract version numbers to head of pom.xml (addendum)

2020-01-08 Thread GitBox
bgaborg merged pull request #1773: HADOOP-16772. Extract version numbers to 
head of pom.xml (addendum)
URL: https://github.com/apache/hadoop/pull/1773
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1800: HDFS-15100. RBF: Print stacktrace when DFSRouter fails to fetch/parse JMX output from NameNode.

2020-01-08 Thread GitBox
hadoop-yetus commented on issue #1800: HDFS-15100. RBF: Print stacktrace when 
DFSRouter fails to fetch/parse JMX output from NameNode.
URL: https://github.com/apache/hadoop/pull/1800#issuecomment-571965778
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  18m 21s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 25s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  7s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  5s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 17s |  
hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 28s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 10s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 27s |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  65m 47s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
   |   | hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1800/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1800 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c4dcbbafe1d6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / aba3f6c |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1800/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1800/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1800/1/testReport/ |
   | Max. process+thread count | 2453 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1800/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on issue #1769: HADOOP-16761. KMSClientProvider does not work with client using ticke…

2020-01-08 Thread GitBox
nandakumar131 commented on issue #1769: HADOOP-16761. KMSClientProvider does 
not work with client using ticke…
URL: https://github.com/apache/hadoop/pull/1769#issuecomment-571960993
 
 
   Thanks @xiaoyuyao for working on this.
   
   For externally managed subjects both `actualUgi#isFromKeytab` and 
`actualUgi#isFromTicket` will return false, as both the call checks 
`isHadoopLogin`.
   Even with this change we will end up executing `actualUgi = 
UserGroupInformation.getLoginUser()` for externally managed subjects.
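The point being made can be reduced to a small decision sketch. The method and its flags are hypothetical stand-ins for `actualUgi#isFromKeytab` / `actualUgi#isFromTicket`, not the real UserGroupInformation API.

```java
// Hypothetical stand-in for the logic under discussion: for an externally
// managed Subject, neither "from keytab" nor "from ticket" is true (both
// checks go through isHadoopLogin), so the fallback branch is always taken.
public class UgiFallback {
    static String resolve(boolean fromKeytab, boolean fromTicket) {
        if (fromKeytab || fromTicket) {
            return "actualUgi"; // keep the Kerberos-backed UGI
        }
        return "loginUser";     // always reached for external subjects
    }
}
```

That is, even with the change, an externally managed subject (false/false) still lands on the `getLoginUser()` fallback, which is the gap the comment highlights.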


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14269) Create module-info.java for each module

2020-01-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14269:
---
Parent Issue: HADOOP-16795  (was: HADOOP-15338)

> Create module-info.java for each module
> ---
>
> Key: HADOOP-14269
> URL: https://issues.apache.org/jira/browse/HADOOP-14269
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> module-info.java is required for JDK9 Jigsaw feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15786) [JDK 11] Add automatic module name

2020-01-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15786:
---
Parent Issue: HADOOP-16795  (was: HADOOP-15338)

> [JDK 11] Add automatic module name
> --
>
> Key: HADOOP-15786
> URL: https://issues.apache.org/jira/browse/HADOOP-15786
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> HADOOP-14269 proposes adding module-info.java for each module, however, it is 
> too complex and difficult for now because we should support Java 8, which 
> does not support module-info.java. 
> Adding automatic module name in MANIFEST.MF can add module name without 
> module-info.java, and it helps other projects a lot.
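For context, an automatic module name is a single MANIFEST.MF attribute, which Java 8 ignores but JDK 9+ uses as the module name. The value below is illustrative only:

```
Automatic-Module-Name: org.apache.hadoop.common
```

Unlike a full module-info.java, this requires no `exports`/`requires` declarations, which is why it is compatible with a Java 8 build.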



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16115) [JDK 11] TestHttpServer#testJersey fails

2020-01-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16115:
---
Parent Issue: HADOOP-16795  (was: HADOOP-15338)

> [JDK 11] TestHttpServer#testJersey fails
> 
>
> Key: HADOOP-16115
> URL: https://issues.apache.org/jira/browse/HADOOP-16115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: mvn-test-11.0.4.log
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.http.TestHttpServer
> [ERROR] Tests run: 26, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 5.954 s <<< FAILURE! - in org.apache.hadoop.http.TestHttpServer
> [ERROR] testJersey(org.apache.hadoop.http.TestHttpServer)  Time elapsed: 
> 0.128 s  <<< ERROR!
> java.io.IOException: Server returned HTTP response code: 500 for URL: 
> http://localhost:40339/jersey/foo?op=bar
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913)
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
>   at 
> org.apache.hadoop.http.HttpServerFunctionalTest.readOutput(HttpServerFunctionalTest.java:260)
>   at 
> org.apache.hadoop.http.TestHttpServer.testJersey(TestHttpServer.java:526)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16795) Java 11 compile support

2020-01-08 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16795:
--

 Summary: Java 11 compile support
 Key: HADOOP-16795
 URL: https://issues.apache.org/jira/browse/HADOOP-16795
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


Split from HADOOP-15338.
Now Hadoop must be compiled with Java 8. This issue is to support compiling 
Hadoop with Java 11.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15338) Java 11 runtime support

2020-01-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15338:
---
Summary: Java 11 runtime support  (was: Support Java 11 LTS in Hadoop)

> Java 11 runtime support
> ---
>
> Key: HADOOP-15338
> URL: https://issues.apache.org/jira/browse/HADOOP-15338
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> Oracle JDK 8 will be EoL during January 2019, and RedHat will end support for 
> OpenJDK 8 in June 2023 ([https://access.redhat.com/articles/1299013]), so we 
> need to support Java 11 LTS at least before June 2023.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15338) Support Java 11 LTS in Hadoop

2020-01-08 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010478#comment-17010478
 ] 

Akira Ajisaka commented on HADOOP-15338:


Thanks [~brahmareddy] for your comment.
I think Java 11 runtime support is finished, but compile support is not. I'll 
split this JIRA into two issues (runtime and compile) and close the 
runtime-support one.

> Support Java 11 LTS in Hadoop
> -
>
> Key: HADOOP-15338
> URL: https://issues.apache.org/jira/browse/HADOOP-15338
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> Oracle JDK 8 will be EoL during January 2019, and RedHat will end support for 
> OpenJDK 8 in June 2023 ([https://access.redhat.com/articles/1299013]), so we 
> need to support Java 11 LTS at least before June 2023.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16794) S3 Encryption is always using default region-specific AWS-managed KMS key

2020-01-08 Thread Mukund Thakur (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010476#comment-17010476
 ] 

Mukund Thakur commented on HADOOP-16794:


CC [~shwethags] 

> S3 Encryption is always using default region-specific AWS-managed KMS key
> -
>
> Key: HADOOP-16794
> URL: https://issues.apache.org/jira/browse/HADOOP-16794
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Mukund Thakur
>Priority: Major
>
> When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all 
> files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the 
> wrong encryption key, always falling back to the region-specific AWS-managed 
> KMS key for S3, instead of retaining the custom CMK.
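
The reported fallback can normally be avoided by pinning the CMK on the client side instead of relying on the bucket's default encryption — a minimal core-site.xml sketch, assuming the Hadoop 3.x-era s3a property names and a placeholder key ARN:

```xml
<!-- Sketch: force SSE-KMS with an explicit CMK for all s3a uploads.
     The key ARN below is a placeholder, not a real key. -->
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>SSE-KMS</value>
</property>
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:us-west-2:111122223333:key/your-cmk-id</value>
</property>
```

When `fs.s3a.server-side-encryption.key` is left unset, SSE-KMS uploads fall back to the region's AWS-managed `aws/s3` key, which matches the behaviour described above.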






[jira] [Updated] (HADOOP-16794) S3 Encryption is always using default region-specific AWS-managed KMS key

2020-01-08 Thread Mukund Thakur (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukund Thakur updated HADOOP-16794:
---
Description: When using (bucket-level) S3 Default Encryption with SSE-KMS 
and a CMK, all files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme 
receive the wrong encryption key, always falling back to the region-specific 
AWS-managed KMS key for S3, instead of retaining the custom CMK.

> S3 Encryption is always using default region-specific AWS-managed KMS key
> -
>
> Key: HADOOP-16794
> URL: https://issues.apache.org/jira/browse/HADOOP-16794
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Mukund Thakur
>Priority: Major
>
> When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all 
> files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the 
> wrong encryption key, always falling back to the region-specific AWS-managed 
> KMS key for S3, instead of retaining the custom CMK.






[jira] [Created] (HADOOP-16794) S3 Encryption is always using default region-specific AWS-managed KMS key

2020-01-08 Thread Mukund Thakur (Jira)
Mukund Thakur created HADOOP-16794:
--

 Summary: S3 Encryption is always using default region-specific 
AWS-managed KMS key
 Key: HADOOP-16794
 URL: https://issues.apache.org/jira/browse/HADOOP-16794
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Mukund Thakur









[GitHub] [hadoop] aajisaka opened a new pull request #1800: HDFS-15100. RBF: Print stacktrace when DFSRouter fails to fetch/parse JMX output from NameNode.

2020-01-08 Thread GitBox
aajisaka opened a new pull request #1800: HDFS-15100. RBF: Print stacktrace 
when DFSRouter fails to fetch/parse JMX output from NameNode.
URL: https://github.com/apache/hadoop/pull/1800
 
 
   JIRA: https://issues.apache.org/jira/browse/HDFS-15100


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org