[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=600923&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600923
 ]

ASF GitHub Bot logged work on HDFS-15790:
-

Author: ASF GitHub Bot
Created on: 23/May/21 12:05
Start Date: 23/May/21 12:05
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #2767:
URL: https://github.com/apache/hadoop/pull/2767#discussion_r637485155



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine2.java
##########
@@ -495,6 +524,7 @@ private static ProtoClassProtoImpl getProtocolImpl(RPC.Server server,
* it is.
* 
*/
+  @SuppressWarnings("deprecation")

Review comment:
   It would help the applications using the new ProtobufRpcEngine2 a lot to 
know what API replaces this deprecated API.
   Maybe it can be written in the javadoc, or in the release note.

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
##########
@@ -443,144 +430,52 @@ public Server(Class<?> protocolClass, Object protocolImpl,
         SecretManager<? extends TokenIdentifier> secretManager,
         String portRangeConfig, AlignmentContext alignmentContext)
         throws IOException {
-      super(bindAddress, port, null, numHandlers,
-          numReaders, queueSizePerHandler, conf,
-          serverNameFromClass(protocolImpl.getClass()), secretManager,
-          portRangeConfig);
-      setAlignmentContext(alignmentContext);
-      this.verbose = verbose;
-      registerProtocolAndImpl(RPC.RpcKind.RPC_PROTOCOL_BUFFER, protocolClass,
-          protocolImpl);
+      super(protocolClass, protocolImpl, conf, bindAddress, port, numHandlers,
+          numReaders, queueSizePerHandler, verbose, secretManager,
+          portRangeConfig, alignmentContext);
     }
 
-    @Override
-    protected RpcInvoker getServerRpcInvoker(RpcKind rpcKind) {
-      if (rpcKind == RpcKind.RPC_PROTOCOL_BUFFER) {
-        return RPC_INVOKER;
-      }
-      return super.getServerRpcInvoker(rpcKind);
-    }
-
-    /**
-     * Protobuf invoker for {@link RpcInvoker}
-     */
-    static class ProtoBufRpcInvoker implements RpcInvoker {
-      private static ProtoClassProtoImpl getProtocolImpl(RPC.Server server,
-          String protoName, long clientVersion) throws RpcServerException {
-        ProtoNameVer pv = new ProtoNameVer(protoName, clientVersion);
-        ProtoClassProtoImpl impl =
-            server.getProtocolImplMap(RPC.RpcKind.RPC_PROTOCOL_BUFFER).get(pv);
-        if (impl == null) { // no match for Protocol AND Version
-          VerProtocolImpl highest =
-              server.getHighestSupportedProtocol(RPC.RpcKind.RPC_PROTOCOL_BUFFER,
-                  protoName);
-          if (highest == null) {
-            throw new RpcNoSuchProtocolException(
-                "Unknown protocol: " + protoName);
-          }
-          // protocol supported but not the version that client wants
-          throw new RPC.VersionMismatch(protoName, clientVersion,
-              highest.version);
-        }
-        return impl;
+    static RpcWritable processCall(RPC.Server server,

Review comment:
   can you add a comment here that this is practically the same as 
ProtobufRpcEngine2.call() except for the Message class, and that if this method 
is modified, the other method should be updated as well? (Or add the comment in 
ProtobufRpcEngine2.call())
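
   For illustration, a sketch of the kind of cross-reference comment being 
requested (the wording below is hypothetical, not part of the patch):

{code:java}
// NOTE: This method is practically identical to the call() path in
// ProtobufRpcEngine2, except for the (unshaded vs. shaded) protobuf Message
// classes involved. If this method is modified, apply the matching change to
// ProtobufRpcEngine2 as well.
{code}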

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
##########
@@ -937,11 +937,18 @@ public int hashCode() {
      */
     static class ProtoClassProtoImpl {
       final Class<?> protocolClass;
-      final Object protocolImpl;
+      final Object protocolImpl;
+      private final boolean newPBImpl;
+
       ProtoClassProtoImpl(Class<?> protocolClass, Object protocolImpl) {
         this.protocolClass = protocolClass;
         this.protocolImpl = protocolImpl;
+        this.newPBImpl = protocolImpl instanceof BlockingService;
       }
+
+      public boolean isNewPBImpl() {
Review comment:
   It might be easier to understand if it were called "isShadedPBImpl()" instead.
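
   A minimal sketch of the suggested rename; the field name is taken from the 
diff above, and the accessor body is assumed:

{code:java}
// Hypothetical final shape after the rename:
public boolean isShadedPBImpl() {
  return newPBImpl;
}
{code}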




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600923)
Time Spent: 3h 20m  (was: 3h 10m)

> Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
> --
>
> Key: HDFS-15790
> URL: https://issues.apache.org/jira/browse/HDFS-15790
> Project: Hadoop HDFS
>  Issue T

[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=600929&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600929
 ]

ASF GitHub Bot logged work on HDFS-15790:
-

Author: ASF GitHub Bot
Created on: 23/May/21 14:01
Start Date: 23/May/21 14:01
Worklog Time Spent: 10m 
  Work Description: vinayakumarb commented on a change in pull request 
#2767:
URL: https://github.com/apache/hadoop/pull/2767#discussion_r637551551



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine2.java
##########
@@ -495,6 +524,7 @@ private static ProtoClassProtoImpl getProtocolImpl(RPC.Server server,
* it is.
* 
*/
+  @SuppressWarnings("deprecation")

Review comment:
   This was to suppress javac warnings due to the usage of 
{{ProtobufRpcEngine}} in {{ProtobufRpcEngine2}}. I have moved it to a new 
method, where this exact usage is present.






Issue Time Tracking
---

Worklog Id: (was: 600929)
Time Spent: 3.5h  (was: 3h 20m)

> Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
> --
>
> Key: HDFS-15790
> URL: https://issues.apache.org/jira/browse/HDFS-15790
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Critical
>  Labels: pull-request-available, release-blocker
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Changing from Protobuf 2 to Protobuf 3 broke some things in the Apache Hive 
> project.  With regard to backwards compatibility for downstream projects, 
> this was not a good change to make between minor versions.
> Additionally, the two frameworks are not drop-in replacements; they have 
> some differences.  Also, Protobuf 2 is not deprecated, so let us have both 
> protocols available at the same time.  Protobuf 2 support can be dropped in 
> Hadoop 4.x.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=600930&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600930
 ]

ASF GitHub Bot logged work on HDFS-15790:
-

Author: ASF GitHub Bot
Created on: 23/May/21 14:02
Start Date: 23/May/21 14:02
Worklog Time Spent: 10m 
  Work Description: vinayakumarb commented on a change in pull request 
#2767:
URL: https://github.com/apache/hadoop/pull/2767#discussion_r637551571



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
##########
@@ -937,11 +937,18 @@ public int hashCode() {
      */
     static class ProtoClassProtoImpl {
       final Class<?> protocolClass;
-      final Object protocolImpl;
+      final Object protocolImpl;
+      private final boolean newPBImpl;
+
       ProtoClassProtoImpl(Class<?> protocolClass, Object protocolImpl) {
         this.protocolClass = protocolClass;
         this.protocolImpl = protocolImpl;
+        this.newPBImpl = protocolImpl instanceof BlockingService;
       }
+
+      public boolean isNewPBImpl() {

Review comment:
   Done

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
##########
@@ -443,144 +430,52 @@ public Server(Class<?> protocolClass, Object protocolImpl,
         SecretManager<? extends TokenIdentifier> secretManager,
         String portRangeConfig, AlignmentContext alignmentContext)
         throws IOException {
-      super(bindAddress, port, null, numHandlers,
-          numReaders, queueSizePerHandler, conf,
-          serverNameFromClass(protocolImpl.getClass()), secretManager,
-          portRangeConfig);
-      setAlignmentContext(alignmentContext);
-      this.verbose = verbose;
-      registerProtocolAndImpl(RPC.RpcKind.RPC_PROTOCOL_BUFFER, protocolClass,
-          protocolImpl);
+      super(protocolClass, protocolImpl, conf, bindAddress, port, numHandlers,
+          numReaders, queueSizePerHandler, verbose, secretManager,
+          portRangeConfig, alignmentContext);
     }
 
-    @Override
-    protected RpcInvoker getServerRpcInvoker(RpcKind rpcKind) {
-      if (rpcKind == RpcKind.RPC_PROTOCOL_BUFFER) {
-        return RPC_INVOKER;
-      }
-      return super.getServerRpcInvoker(rpcKind);
-    }
-
-    /**
-     * Protobuf invoker for {@link RpcInvoker}
-     */
-    static class ProtoBufRpcInvoker implements RpcInvoker {
-      private static ProtoClassProtoImpl getProtocolImpl(RPC.Server server,
-          String protoName, long clientVersion) throws RpcServerException {
-        ProtoNameVer pv = new ProtoNameVer(protoName, clientVersion);
-        ProtoClassProtoImpl impl =
-            server.getProtocolImplMap(RPC.RpcKind.RPC_PROTOCOL_BUFFER).get(pv);
-        if (impl == null) { // no match for Protocol AND Version
-          VerProtocolImpl highest =
-              server.getHighestSupportedProtocol(RPC.RpcKind.RPC_PROTOCOL_BUFFER,
-                  protoName);
-          if (highest == null) {
-            throw new RpcNoSuchProtocolException(
-                "Unknown protocol: " + protoName);
-          }
-          // protocol supported but not the version that client wants
-          throw new RPC.VersionMismatch(protoName, clientVersion,
-              highest.version);
-        }
-        return impl;
+    static RpcWritable processCall(RPC.Server server,

Review comment:
   Done.






Issue Time Tracking
---

Worklog Id: (was: 600930)
Time Spent: 3h 40m  (was: 3.5h)

> Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
> --
>
> Key: HDFS-15790
> URL: https://issues.apache.org/jira/browse/HDFS-15790
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Critical
>  Labels: pull-request-available, release-blocker
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Changing from Protobuf 2 to Protobuf 3 broke some things in the Apache Hive 
> project.  With regard to backwards compatibility for downstream projects, 
> this was not a good change to make between minor versions.
> Additionally, the two frameworks are not drop-in replacements; they have 
> some differences.  Also, Protobuf 2 is not deprecated, so let us have both 
> protocols available at the same time.  Protobuf 2 support can be dropped in 
> Hadoop 4.x.




[jira] [Resolved] (HDFS-16026) Restore cross platform mkstemp

2021-05-23 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16026.
---
Resolution: Abandoned

This fix will be tracked as part of HDFS-15971.

> Restore cross platform mkstemp
> --
>
> Key: HDFS-16026
> URL: https://issues.apache.org/jira/browse/HDFS-16026
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>







[jira] [Updated] (HDFS-15971) Make mkstemp cross platform

2021-05-23 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra updated HDFS-15971:
--
Attachment: commit-details.txt

> Make mkstemp cross platform
> ---
>
> Key: HDFS-15971
> URL: https://issues.apache.org/jira/browse/HDFS-15971
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: commit-details.txt
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> mkstemp isn't available in Visual C++. Need to make it cross platform.






[jira] [Updated] (HDFS-15971) Make mkstemp cross platform

2021-05-23 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra updated HDFS-15971:
--
Attachment: (was: build-log5.zip)

> Make mkstemp cross platform
> ---
>
> Key: HDFS-15971
> URL: https://issues.apache.org/jira/browse/HDFS-15971
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: commit-details.txt
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> mkstemp isn't available in Visual C++. Need to make it cross platform.






[jira] [Updated] (HDFS-15971) Make mkstemp cross platform

2021-05-23 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra updated HDFS-15971:
--
Attachment: build-log5.zip

> Make mkstemp cross platform
> ---
>
> Key: HDFS-15971
> URL: https://issues.apache.org/jira/browse/HDFS-15971
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: commit-details.txt
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> mkstemp isn't available in Visual C++. Need to make it cross platform.






[jira] [Updated] (HDFS-15971) Make mkstemp cross platform

2021-05-23 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra updated HDFS-15971:
--
Attachment: build-log.zip

> Make mkstemp cross platform
> ---
>
> Key: HDFS-15971
> URL: https://issues.apache.org/jira/browse/HDFS-15971
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log.zip, commit-details.txt
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> mkstemp isn't available in Visual C++. Need to make it cross platform.






[jira] [Updated] (HDFS-15971) Make mkstemp cross platform

2021-05-23 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra updated HDFS-15971:
--
Attachment: (was: commit-details.txt)

> Make mkstemp cross platform
> ---
>
> Key: HDFS-15971
> URL: https://issues.apache.org/jira/browse/HDFS-15971
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log.zip, commit-details.txt
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> mkstemp isn't available in Visual C++. Need to make it cross platform.






[jira] [Updated] (HDFS-15971) Make mkstemp cross platform

2021-05-23 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra updated HDFS-15971:
--
Attachment: commit-details.txt

> Make mkstemp cross platform
> ---
>
> Key: HDFS-15971
> URL: https://issues.apache.org/jira/browse/HDFS-15971
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log.zip, commit-details.txt
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> mkstemp isn't available in Visual C++. Need to make it cross platform.






[jira] [Updated] (HDFS-15971) Make mkstemp cross platform

2021-05-23 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra updated HDFS-15971:
--
Attachment: Dockerfile_centos_7

> Make mkstemp cross platform
> ---
>
> Key: HDFS-15971
> URL: https://issues.apache.org/jira/browse/HDFS-15971
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: Dockerfile_centos_7, build-log.zip, commit-details.txt
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> mkstemp isn't available in Visual C++. Need to make it cross platform.






[jira] [Work logged] (HDFS-15971) Make mkstemp cross platform

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15971?focusedWorklogId=600942&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600942
 ]

ASF GitHub Bot logged work on HDFS-15971:
-

Author: ASF GitHub Bot
Created on: 23/May/21 16:36
Start Date: 23/May/21 16:36
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra opened a new pull request #3044:
URL: https://github.com/apache/hadoop/pull/3044


   * This reverts commit aed13f0f42fefe30a53eb73c65c2072a031f173e.
   * Verified by building locally on CentOS 7 that Hadoop builds fine with 
     this PR.
   * Build log - 
     https://issues.apache.org/jira/secure/attachment/13025814/build-log.zip
     Reverted commit - 
     https://issues.apache.org/jira/secure/attachment/13025815/commit-details.txt
     Dockerfile_centos_7 - 
     https://issues.apache.org/jira/secure/attachment/13025816/Dockerfile_centos_7




Issue Time Tracking
---

Worklog Id: (was: 600942)
Time Spent: 1h 20m  (was: 1h 10m)

> Make mkstemp cross platform
> ---
>
> Key: HDFS-15971
> URL: https://issues.apache.org/jira/browse/HDFS-15971
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: Dockerfile_centos_7, build-log.zip, commit-details.txt
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> mkstemp isn't available in Visual C++. Need to make it cross platform.






[jira] [Commented] (HDFS-15971) Make mkstemp cross platform

2021-05-23 Thread Gautham Banasandra (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17350077#comment-17350077
 ] 

Gautham Banasandra commented on HDFS-15971:
---

Hi [~ebadger],

I was able to verify that Hadoop trunk builds fine with my fix on CentOS 7. 
Please refer to [^build-log.zip] and [^commit-details.txt]. I used the 
[^Dockerfile_centos_7] docker image for building.

The most likely reason the build failed on your system is that you might be 
using an older version of CMake. Could you please ensure that you're running 
CMake 3.19 or higher? Please feel free to use [^Dockerfile_centos_7] if you 
would like to verify as well.

I'm restoring my PR - https://github.com/apache/hadoop/pull/3044. [~elgoiri], 
please review it.

> Make mkstemp cross platform
> ---
>
> Key: HDFS-15971
> URL: https://issues.apache.org/jira/browse/HDFS-15971
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: Dockerfile_centos_7, build-log.zip, commit-details.txt
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> mkstemp isn't available in Visual C++. Need to make it cross platform.






[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=600946&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600946
 ]

ASF GitHub Bot logged work on HDFS-15790:
-

Author: ASF GitHub Bot
Created on: 23/May/21 16:57
Start Date: 23/May/21 16:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2767:
URL: https://github.com/apache/hadoop/pull/2767#issuecomment-846593536


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  cc  |  20m  8s | 
[/results-compile-cc-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/8/artifact/out/results-compile-cc-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 22 new + 305 unchanged - 22 
fixed = 327 total (was 327)  |
   | +1 :green_heart: |  javac  |  20m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  cc  |  18m 11s | 
[/results-compile-cc-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/8/artifact/out/results-compile-cc-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 34 new + 293 
unchanged - 34 fixed = 327 total (was 327)  |
   | +1 :green_heart: |  javac  |  18m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  8s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/8/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 2 new + 210 
unchanged - 7 fixed = 212 total (was 217)  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  8s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 179m  1s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=

[jira] [Work logged] (HDFS-16026) Restore cross platform mkstemp

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16026?focusedWorklogId=600948&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600948
 ]

ASF GitHub Bot logged work on HDFS-16026:
-

Author: ASF GitHub Bot
Created on: 23/May/21 17:00
Start Date: 23/May/21 17:00
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra closed pull request #3014:
URL: https://github.com/apache/hadoop/pull/3014


   




Issue Time Tracking
---

Worklog Id: (was: 600948)
Time Spent: 4.5h  (was: 4h 20m)

> Restore cross platform mkstemp
> --
>
> Key: HDFS-16026
> URL: https://issues.apache.org/jira/browse/HDFS-16026
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>







[jira] [Work logged] (HDFS-16026) Restore cross platform mkstemp

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16026?focusedWorklogId=600947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600947
 ]

ASF GitHub Bot logged work on HDFS-16026:
-

Author: ASF GitHub Bot
Created on: 23/May/21 17:00
Start Date: 23/May/21 17:00
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on pull request #3014:
URL: https://github.com/apache/hadoop/pull/3014#issuecomment-846593890


   Abandoning this. Will be handled in 
https://github.com/apache/hadoop/pull/3044.




Issue Time Tracking
---

Worklog Id: (was: 600947)
Time Spent: 4h 20m  (was: 4h 10m)

> Restore cross platform mkstemp
> --
>
> Key: HDFS-16026
> URL: https://issues.apache.org/jira/browse/HDFS-16026
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>







[jira] [Work logged] (HDFS-15971) Make mkstemp cross platform

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15971?focusedWorklogId=600974&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600974
 ]

ASF GitHub Bot logged work on HDFS-15971:
-

Author: ASF GitHub Bot
Created on: 23/May/21 19:57
Start Date: 23/May/21 19:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3044:
URL: https://github.com/apache/hadoop/pull/3044#issuecomment-846616162


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   2m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  58m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  cc  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  cc  |   2m 47s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 47s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 47s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 102m 44s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 199m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3044/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3044 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux 6a91a863a395 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5ad00b64df6995a4bf0bfb23b34fc8a66022ddbf |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3044/1/testReport/ |
   | Max. process+thread count | 586 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3044/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 600974)
Time Spent: 1.5h  (was: 1h 20m)

> Make mkstemp cross platform
> ---
>
> Key: HDFS-15971
>  

[jira] [Work logged] (HDFS-16031) Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16031?focusedWorklogId=600982&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600982
 ]

ASF GitHub Bot logged work on HDFS-16031:
-

Author: ASF GitHub Bot
Created on: 23/May/21 22:45
Start Date: 23/May/21 22:45
Worklog Time Spent: 10m 
  Work Description: Nargeshdb commented on pull request #3027:
URL: https://github.com/apache/hadoop/pull/3027#issuecomment-846635685


   > Thanks for the patch.
   > We should use try-with-resources to close the resources.
   
   @aajisaka Thanks for the feedback. 
   https://github.com/apache/hadoop/pull/3027/commits/d0d81d605cadde6dd7ecc0813e2a929e31d22f97
   




Issue Time Tracking
---

Worklog Id: (was: 600982)
Time Spent: 0.5h  (was: 20m)

> Possible Resource Leak in 
> org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap
> -
>
> Key: HDFS-16031
> URL: https://issues.apache.org/jira/browse/HDFS-16031
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Narges Shadab
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We noticed a possible resource leak in 
> [getCompressedAliasMap|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java#L320].
>  If {{finish()}} at line 334 throws an IOException, then {{tOut}}, {{gzOut}}, 
> and {{bOut}} remain open, since the exception isn't caught locally and there 
> is no way for any caller to close them.
> I've submitted a pull request to fix it.
>  
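
A minimal sketch of the try-with-resources pattern being suggested; the stream 
names mirror the description above, but the construction shown here is 
assumed, not the actual InMemoryAliasMap code:

{code:java}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

class AliasMapCompressSketch {
  static void writeCompressed(File outFile) throws IOException {
    // Each resource is closed automatically, in reverse order, even if
    // finish() (or anything else in the body) throws an IOException.
    try (FileOutputStream bOut = new FileOutputStream(outFile);
         GZIPOutputStream gzOut = new GZIPOutputStream(bOut);
         TarArchiveOutputStream tOut = new TarArchiveOutputStream(gzOut)) {
      // ... add tar entries here ...
      tOut.finish();
    }
  }
}
{code}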






[jira] [Commented] (HDFS-15914) Possible Resource Leak in org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap

2021-05-23 Thread Narges Shadab (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17350153#comment-17350153
 ] 

Narges Shadab commented on HDFS-15914:
--

I was wondering if I could get feedback on this PR.

> Possible Resource Leak in 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap
> 
>
> Key: HDFS-15914
> URL: https://issues.apache.org/jira/browse/HDFS-15914
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Narges Shadab
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We noticed a possible resource leak 
> [here|https://github.com/apache/hadoop/blob/174f3a96b10a0ab0fd8aed1b0f904ca5f0c3f268/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java#L370].
>  {{createInputStream}} might throw an {{IOException}} and if it occurs, {{i}} 
> remains open since the exception isn't caught locally, and there is no way 
> for any caller to close the FSDataInputStream that is returned by 
> {{fs.open(file)}} at line 371.
> I'll submit a pull request to fix it.
>  
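
For illustration, a minimal close-on-failure sketch of the fix pattern 
described above; the {{wrap}} helper is hypothetical and stands in for the 
stream creation that may throw:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

class CloseOnFailureSketch {
  // Hypothetical stand-in for the createInputStream call that may throw.
  static InputStream wrap(FSDataInputStream in) throws IOException {
    return in;
  }

  static InputStream open(FileSystem fs, Path file) throws IOException {
    FSDataInputStream i = fs.open(file);
    try {
      return wrap(i);
    } catch (IOException e) {
      // Close the underlying stream before propagating, so nothing leaks.
      IOUtils.closeStream(i);
      throw e;
    }
  }
}
{code}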






[jira] [Updated] (HDFS-13183) Standby NameNode process getBlocks request to reduce Active load

2021-05-23 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13183:
---
Release Note: 
Enables the Balancer to redirect its getBlocks requests to a Standby NameNode, 
thus reducing the performance impact of the Balancer on the Active NameNode.

The feature is disabled by default. To enable it, configure the balancer's 
hdfs-site.xml with: 
dfs.ha.allow.stale.reads = true.
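
A minimal sketch (assuming the standard org.apache.hadoop.conf.Configuration 
API) of the balancer-side setting named above:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class StaleReadsConfigSketch {
  public static void main(String[] args) {
    // Equivalent to setting dfs.ha.allow.stale.reads = true in the
    // balancer's hdfs-site.xml; the feature is disabled (false) by default.
    Configuration conf = new Configuration();
    conf.setBoolean("dfs.ha.allow.stale.reads", true);
    System.out.println(conf.getBoolean("dfs.ha.allow.stale.reads", false));
  }
}
{code}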

> Standby NameNode process getBlocks request to reduce Active load
> 
>
> Key: HDFS-13183
> URL: https://issues.apache.org/jira/browse/HDFS-13183
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover, namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-13183-trunk.001.patch, HDFS-13183-trunk.002.patch, 
> HDFS-13183-trunk.003.patch, HDFS-13183.004.patch, HDFS-13183.005.patch, 
> HDFS-13183.006.patch, HDFS-13183.007.patch, HDFS-13183.addendum.patch, 
> HDFS-13183.addendum.patch
>
>
> The performance of the Active NameNode can be impacted when the {{Balancer}} 
> requests #getBlocks, since querying the blocks of overly full DNs is 
> currently extremely inefficient. The main reason is that 
> {{NameNodeRpcServer#getBlocks}} holds the read lock for a long time. In the 
> extreme case, all handlers of the Active NameNode RPC server are occupied by 
> one {{NameNodeRpcServer#getBlocks}} reader plus other write operation calls, 
> and the Active NameNode becomes unresponsive for seconds or even minutes.
> Similar performance concerns about the Balancer have been reported in 
> HDFS-9412, HDFS-7967, etc.
> If the Standby NameNode can shoulder the heavy #getBlocks burden, it could 
> speed up balancing and reduce the performance impact on the Active NameNode.






[jira] [Updated] (HDFS-15942) Increase Quota initialization threads

2021-05-23 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-15942:
---
Release Note: The default quota initialization thread count during the 
NameNode startup process (dfs.namenode.quota.init-threads) is increased from 4 
to 12.

> Increase Quota initialization threads
> -
>
> Key: HDFS-15942
> URL: https://issues.apache.org/jira/browse/HDFS-15942
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15942.001.patch
>
>
> On large namespaces, the quota initialization at startup can take a long time 
> with the default 4 threads. Also, on NN failover, the quota often needs to be 
> calculated before the failover can complete, delaying the failover.
> I performed some benchmarks some time back on a large image (316M inodes, 
> 35GB on disk); the quota load takes:
> {code}
> quota - 4  threads 39 seconds
> quota - 8  threads 23 seconds
> quota - 12 threads 20 seconds
> quota - 16 threads 15 seconds
> {code}
> As the quota is calculated when the NN is starting up (and hence doing no 
> other work) or at failover time before the new standby becomes active, I 
> think the quota calculation should use as many threads as possible.
> I propose we change the default to 8 or 12 on at least trunk and branch-3.3 
> so we have a better default going forward.
> Has anyone got any other thoughts?
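
For reference, a minimal sketch (again assuming the standard Configuration 
API) of overriding the thread count discussed above:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class QuotaInitThreadsSketch {
  public static void main(String[] args) {
    // Equivalent to setting dfs.namenode.quota.init-threads in hdfs-site.xml;
    // the benchmark above suggests 12 as a better default than 4.
    Configuration conf = new Configuration();
    conf.setInt("dfs.namenode.quota.init-threads", 12);
    System.out.println(conf.getInt("dfs.namenode.quota.init-threads", 4));
  }
}
{code}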






[jira] [Updated] (HDFS-15916) DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff

2021-05-23 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-15916:
---
Target Version/s: 3.1.4, 3.4.0, 3.2.3, 3.3.2  (was: 3.1.4, 3.3.1, 3.4.0, 
3.2.3)

> DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for 
> snapshotdiff
> ---
>
> Key: HDFS-15916
> URL: https://issues.apache.org/jira/browse/HDFS-15916
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 3.2.2
>Reporter: Srinivasu Majeti
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Looks like when using the distcp -diff option between two snapshots from a 
> Hadoop 3 cluster to a Hadoop 2 cluster, we get the exception below; this 
> seems to break backward compatibility due to the introduction of the new 
> getSnapshotDiffReportListing API.
>  
> {code:java}
> hadoop distcp -diff s1 s2 -update src_cluster_path dst_cluster_path
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException):
>  Unknown method getSnapshotDiffReportListing called on 
> org.apache.hadoop.hdfs.protocol.ClientProtocol protocol
>  {code}
>  
>  






[jira] [Updated] (HDFS-15345) RBF: RouterPermissionChecker#checkSuperuserPrivilege should use UGI#getGroups after HADOOP-13442

2021-05-23 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-15345:
---
Target Version/s: 3.3.2

> RBF: RouterPermissionChecker#checkSuperuserPrivilege should use UGI#getGroups 
> after HADOOP-13442
> 
>
> Key: HDFS-15345
> URL: https://issues.apache.org/jira/browse/HDFS-15345
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-13442 added UGI#getGroups to avoid list->array->list conversions. This 
> ticket is opened to change  RouterPermissionChecker#checkSuperuserPrivilege 
> to use UGI#getGroups. 
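
For illustration, a minimal sketch of the conversion-free membership check 
this enables; the method and variable names here are hypothetical:

{code:java}
import java.util.List;
import org.apache.hadoop.security.UserGroupInformation;

class SuperuserGroupsSketch {
  // getGroups() (added by HADOOP-13442) exposes the List<String> directly,
  // avoiding the list->array->list round trip that getGroupNames() implies.
  static boolean isMemberOf(UserGroupInformation ugi, String superGroup) {
    List<String> groups = ugi.getGroups();
    return groups.contains(superGroup);
  }
}
{code}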






[jira] [Updated] (HDFS-15344) DataNode#checkSuperuserPrivilege should use UGI#getGroups after HADOOP-13442

2021-05-23 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-15344:
---
Target Version/s: 3.3.2

> DataNode#checkSuperuserPrivilege should use UGI#getGroups after HADOOP-13442
> 
>
> Key: HDFS-15344
> URL: https://issues.apache.org/jira/browse/HDFS-15344
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-13442 added UGI#getGroups to avoid list->array->list conversions. This 
> ticket is opened to change DataNode#checkSuperuserPrivilege to use 
> UGI#getGroups. 






[jira] [Updated] (HDFS-15660) StorageTypeProto is not compatiable between 3.x and 2.6

2021-05-23 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-15660:
---
Target Version/s: 2.10.2, 3.3.2, 2.9.3  (was: 2.9.3, 2.10.2)

> StorageTypeProto is not compatiable between 3.x and 2.6
> ---
>
> Key: HDFS-15660
> URL: https://issues.apache.org/jira/browse/HDFS-15660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.0.1, 2.9.2, 2.8.5, 2.7.7, 2.10.1
>Reporter: Ryan Wu
>Assignee: Ryan Wu
>Priority: Major
> Fix For: 2.9.3, 3.4.0, 2.10.2
>
> Attachments: HDFS-15660.002.patch, HDFS-15660.003.patch
>
>
> In our case, when the NN had been upgraded to 3.1.3 while the DNs' version 
> was still 2.6, we found that when Hive called the getContentSummary method, 
> the client and server were not compatible because Hadoop 3 added the new 
> PROVIDED storage type.
> {code:java}
> // code placeholder
> 20/04/15 14:28:35 INFO retry.RetryInvocationHandler---main: Exception while 
> invoking getContentSummary of class ClientNamenodeProtocolTranslatorPB over 
> x/x:8020. Trying to fail over immediately.
> java.io.IOException: com.google.protobuf.ServiceException: 
> com.google.protobuf.UninitializedMessageException: Message missing required 
> fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>         at 
> org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getContentSummary(ClientNamenodeProtocolTranslatorPB.java:819)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
>         at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>         at com.sun.proxy.$Proxy11.getContentSummary(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.DFSClient.getContentSummary(DFSClient.java:3144)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:706)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
>         at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getContentSummary(DistributedFileSystem.java:713)
>         at org.apache.hadoop.fs.shell.Count.processPath(Count.java:109)
>         at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
>         at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
>         at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
>         at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
>         at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
>         at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
>         at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>         at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> Caused by: com.google.protobuf.ServiceException: 
> com.google.protobuf.UninitializedMessageException: Message missing required 
> fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:272)
>         at com.sun.proxy.$Proxy10.getContentSummary(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getContentSummary(ClientNamenodeProtocolTranslatorPB.java:816)
>         ... 23 more
> Caused by: com.google.protobuf.UninitializedMessageException: Message missing 
> required fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>         at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetContentSummaryResponseProto$Builder.build(ClientNamenodeProtocolProtos.java:65392)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetContentSummaryResponseProto$Builder.build(ClientNamenodeProtocolProtos.java:65331)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:263)
>         ... 25 more
> {code}
> This compatible issue only hap

[jira] [Commented] (HDFS-15660) StorageTypeProto is not compatiable between 3.x and 2.6

2021-05-23 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17350215#comment-17350215
 ] 

Wei-Chiu Chuang commented on HDFS-15660:


I'm going to cherry-pick this commit to 3.3.1. We still have a substantial 
number of customers on the Hadoop 2.6 line, and this could be a problem for 
them during an upgrade.

HDFS-15025 adds a new storage type but not a new field, so it shouldn't be a 
problem.

> StorageTypeProto is not compatiable between 3.x and 2.6
> ---
>
> Key: HDFS-15660
> URL: https://issues.apache.org/jira/browse/HDFS-15660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.0.1, 2.9.2, 2.8.5, 2.7.7, 2.10.1
>Reporter: Ryan Wu
>Assignee: Ryan Wu
>Priority: Major
> Fix For: 2.9.3, 3.4.0, 2.10.2
>
> Attachments: HDFS-15660.002.patch, HDFS-15660.003.patch
>
>
> In our case, when the NN had been upgraded to 3.1.3 while the DNs' version 
> was still 2.6, we found that when Hive called the getContentSummary method, 
> the client and server were not compatible because Hadoop 3 added the new 
> PROVIDED storage type.
> {code:java}
> // code placeholder

[jira] [Updated] (HDFS-15660) StorageTypeProto is not compatible between 3.x and 2.6

2021-05-23 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-15660:
---
Fix Version/s: 3.3.1

> StorageTypeProto is not compatible between 3.x and 2.6
> ---
>
> Key: HDFS-15660
> URL: https://issues.apache.org/jira/browse/HDFS-15660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.0.1, 2.9.2, 2.8.5, 2.7.7, 2.10.1
>Reporter: Ryan Wu
>Assignee: Ryan Wu
>Priority: Major
> Fix For: 2.9.3, 3.3.1, 3.4.0, 2.10.2
>
> Attachments: HDFS-15660.002.patch, HDFS-15660.003.patch
>
>
> In our case, after the NameNode had been upgraded to 3.1.3 while the 
> DataNodes were still on 2.6, we found that when Hive called the 
> getContentSummary method, the client and server were not compatible 
> because Hadoop 3 added the new PROVIDED storage type.
> {code:java}
> // code placeholder
> 20/04/15 14:28:35 INFO retry.RetryInvocationHandler---main: Exception while 
> invoking getContentSummary of class ClientNamenodeProtocolTranslatorPB over 
> x/x:8020. Trying to fail over immediately.
> java.io.IOException: com.google.protobuf.ServiceException: 
> com.google.protobuf.UninitializedMessageException: Message missing required 
> fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>         at 
> org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getContentSummary(ClientNamenodeProtocolTranslatorPB.java:819)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
>         at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>         at com.sun.proxy.$Proxy11.getContentSummary(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.DFSClient.getContentSummary(DFSClient.java:3144)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:706)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
>         at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getContentSummary(DistributedFileSystem.java:713)
>         at org.apache.hadoop.fs.shell.Count.processPath(Count.java:109)
>         at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
>         at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
>         at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
>         at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
>         at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
>         at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
>         at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>         at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> Caused by: com.google.protobuf.ServiceException: 
> com.google.protobuf.UninitializedMessageException: Message missing required 
> fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:272)
>         at com.sun.proxy.$Proxy10.getContentSummary(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getContentSummary(ClientNamenodeProtocolTranslatorPB.java:816)
>         ... 23 more
> Caused by: com.google.protobuf.UninitializedMessageException: Message missing 
> required fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>         at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetContentSummaryResponseProto$Builder.build(ClientNamenodeProtocolProtos.java:65392)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetContentSummaryResponseProto$Builder.build(ClientNamenodeProtocolProtos.java:65331)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:263)
>         ... 25 more
> {code}
> This compatibility issue only happens when the StorageType feature is used.
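
For readers unfamiliar with the proto2 semantics behind this trace: an enum
value unknown to the local schema is routed to the unknown-field set, so a
required enum field ends up unset and build() throws
UninitializedMessageException. Below is a minimal, self-contained Java sketch
of the failure mode and one tolerant workaround; the enum values, wire
numbering, and the DISK fallback are illustrative assumptions, not the actual
HDFS-15660 patch.

{code:java}
public class StorageTypeFallback {
  // A 2.x-era view of the enum: no PROVIDED value.
  enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }

  static StorageType fromWireOrdinal(int wire) {
    StorageType[] known = StorageType.values();
    // Map ordinals this build has never heard of to a safe default
    // instead of dropping the value and failing a required-field check.
    return (wire >= 0 && wire < known.length) ? known[wire] : StorageType.DISK;
  }

  public static void main(String[] args) {
    // An ordinal only a newer peer knows degrades gracefully to DISK.
    System.out.println(fromWireOrdinal(4));
  }
}
{code}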

[jira] [Updated] (HDFS-15545) (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire

2021-05-23 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-15545:
---
Target Version/s: 3.3.2

> (S)Webhdfs will not use updated delegation tokens available in the ugi after 
> the old ones expire
> 
>
> Key: HDFS-15545
> URL: https://issues.apache.org/jira/browse/HDFS-15545
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Issac Buenrostro
>Assignee: Issac Buenrostro
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15545.001.patch, HDFS-15545.002.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> WebHdfsFileSystem can select a delegation token to use from the current user 
> UGI. The token selection is sticky, and WebHdfsFileSystem will re-use it 
> every time without searching the UGI again.
> If the previous token expires, WebHdfsFileSystem will catch the exception and 
> attempt to get a new token. However, the mechanism to get a new token 
> bypasses searching for one on the UGI, so even if there is external logic 
> that has retrieved a new token, it is not possible to make the FileSystem use 
> the new, valid token, rendering the FileSystem object unusable.
> A simple fix would allow WebHdfsFileSystem to re-search the UGI and, if it 
> finds a different token from the cached one, try to use it.
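
A minimal sketch of the approach described above, built on the real
UserGroupInformation and Token APIs but with a hypothetical helper method
(this is not the merged HDFS-15545 patch): on an authentication failure,
re-scan the current UGI for a token of the expected kind and adopt it only if
it differs from the cached one; fetch a brand-new token only when no
replacement is found.

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

public class TokenRefreshSketch {
  private Token<?> cachedToken;

  boolean adoptFreshTokenFromUgi(String expectedKind) throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    for (Token<?> t : ugi.getTokens()) {      // re-search instead of re-using
      if (t.getKind().toString().equals(expectedKind)
          && !t.equals(cachedToken)) {
        cachedToken = t;                      // adopt externally fetched token
        return true;                          // caller retries the operation
      }
    }
    return false;                             // fall back: request a new token
  }
}
{code}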



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16008) RBF: Tool to initialize ViewFS Mapping to Router

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16008?focusedWorklogId=600999&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600999
 ]

ASF GitHub Bot logged work on HDFS-16008:
-

Author: ASF GitHub Bot
Created on: 24/May/21 03:05
Start Date: 24/May/21 03:05
Worklog Time Spent: 10m 
  Work Description: zhuxiangyi commented on a change in pull request #2981:
URL: https://github.com/apache/hadoop/pull/2981#discussion_r637659170



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
##
@@ -1036,6 +1057,83 @@ private boolean updateQuota(String mount, long nsQuota, 
long ssQuota)
 return updateResponse.getStatus();
   }
 
+  /**
+   * Initialize the ViewFS mount points into the Router, either for a
+   * specific cluster or for all clusters.
+   * @param clusterName The cluster whose mount points should be
+   * initialized, or ALL_CLUSTERS to initialize every cluster.
+   * @return True if the mount points were initialized successfully.
+   * @throws IOException Error adding a mount point.
+   */
+  public boolean initViewFsToMountTable(String clusterName)
+  throws IOException {
+// fs.viewfs.mounttable.ClusterX.link./data
+final String mountTablePrefix;
+if (clusterName.equals(ALL_CLUSTERS)) {
+  mountTablePrefix =
+  Constants.CONFIG_VIEWFS_PREFIX + ".*" +
+  Constants.CONFIG_VIEWFS_LINK + ".";
+} else {
+  mountTablePrefix =
+  Constants.CONFIG_VIEWFS_PREFIX + "." + clusterName + "." +
+  Constants.CONFIG_VIEWFS_LINK + ".";
+}
+final String rootPath = "/";
+Map<String, String> viewFsMap = getConf().getValByRegex(
+mountTablePrefix + rootPath);
+if (viewFsMap.isEmpty()) {
+  System.out.println("There is no ViewFs mapping to initialize.");
+  return true;
+}
+for (Entry<String, String> entry : viewFsMap.entrySet()) {
+  Path path = new Path(entry.getValue());
+  URI destUri = path.toUri();
+  String mountKey = entry.getKey();
+  DestinationOrder order = DestinationOrder.HASH;
+  String mount = mountKey.replaceAll(mountTablePrefix, "");
+  if (!destUri.getScheme().equals("hdfs")) {
+System.out.println("Only supports HDFS; adding mount point failed: " +
+mountKey);
+continue;
+  }
+  if (!mount.startsWith(rootPath) ||
+  !destUri.getPath().startsWith(rootPath)) {
+System.out.println("Added Mount Point failed " + mountKey);
+continue;
+  }
+  String[] nss = new String[]{destUri.getAuthority()};
+  boolean added = addMount(
+  mount, nss, destUri.getPath(), false,
+  false, order, getACLEntityFormHdfsPath(path, getConf()));

Review comment:
   @Hexiaoqiao  I didn't find any problems here; could you tell me the 
details? Thank you very much.
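
For context, here is a small self-contained sketch of the configuration scan
the quoted patch performs, using the real `Configuration.getValByRegex` API;
the cluster name and link targets are made-up examples.

```java
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class ViewFsLinkScan {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("fs.viewfs.mounttable.ClusterX.link./data", "hdfs://ns1/data");
    conf.set("fs.viewfs.mounttable.ClusterX.link./user", "hdfs://ns2/user");
    // One regex pass recovers every link mount point for the cluster.
    Map<String, String> links = conf.getValByRegex(
        "fs\\.viewfs\\.mounttable\\.ClusterX\\.link\\..*");
    for (Map.Entry<String, String> e : links.entrySet()) {
      // Strip everything up to and including ".link." to get the mount path.
      String mount = e.getKey().substring(e.getKey().indexOf(".link.") + 6);
      System.out.println(mount + " -> " + e.getValue());
    }
  }
}
```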




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600999)
Time Spent: 4h 10m  (was: 4h)

> RBF: Tool to initialize ViewFS Mapping to Router
> 
>
> Key: HDFS-16008
> URL: https://issues.apache.org/jira/browse/HDFS-16008
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.3.1
>Reporter: zhu
>Assignee: zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> This is a tool for initializing ViewFS Mapping to Router.
> Some companies are currently migrating from ViewFS to Router; I think they 
> need this tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16028) Add a configuration item for special trash dir

2021-05-23 Thread zhuobin zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuobin zheng updated HDFS-16028:
-
Status: Open  (was: Patch Available)

> Add a configuration item for special trash dir
> --
>
> Key: HDFS-16028
> URL: https://issues.apache.org/jira/browse/HDFS-16028
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDFS-16028.001.patch, HDFS-16028.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In some situations, we don't want to put the trash in the home directory, 
> for example:
>  # To immediately reduce the quota usage of the home directory
>  # In RBF: we want the mount strategy for the trash directory to differ 
> from that of the home directory, and we don't want to mount it per user
> This patch adds the option "fs.trash.dir" to specify the trash directory 
> (${fs.trash.dir}/$USER/.Trash)
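
A minimal sketch of the resolution rule described above; the option name
comes from this patch, while the home-directory fallback shown is a
placeholder for the existing behavior.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class TrashDirResolution {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("fs.trash.dir", "/system/trash"); // the new option
    String user = "alice";                     // normally taken from the UGI
    String trashRoot = conf.get("fs.trash.dir");
    Path trash = trashRoot != null
        ? new Path(trashRoot, user + "/.Trash")  // ${fs.trash.dir}/$USER/.Trash
        : new Path("/user/" + user, ".Trash");   // legacy: homedir/.Trash
    System.out.println(trash);                   // /system/trash/alice/.Trash
  }
}
{code}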



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16028) Add a configuration item for special trash dir

2021-05-23 Thread zhuobin zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuobin zheng updated HDFS-16028:
-
Attachment: HDFS-16028.003.patch
Status: Patch Available  (was: Open)

> Add a configuration item for special trash dir
> --
>
> Key: HDFS-16028
> URL: https://issues.apache.org/jira/browse/HDFS-16028
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDFS-16028.001.patch, HDFS-16028.002.patch, 
> HDFS-16028.003.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In some situations, we don't want to put the trash in the home directory, 
> for example:
>  # To immediately reduce the quota usage of the home directory
>  # In RBF: we want the mount strategy for the trash directory to differ 
> from that of the home directory, and we don't want to mount it per user
> This patch adds the option "fs.trash.dir" to specify the trash directory 
> (${fs.trash.dir}/$USER/.Trash)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16028) Add a configuration item for special trash dir

2021-05-23 Thread zhuobin zheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17350240#comment-17350240
 ] 

zhuobin zheng commented on HDFS-16028:
--

Hi [~zhuqi], thanks for your review and advice! I submitted the 003 patch to 
resolve these problems.
{code:java}
1. We'd better add an enable flag to trigger this besides null check.
{code}
I think an enable flag is redundant, so I didn't add one in 003.
{code:java}
2. We should also add the new conf in core-default.xml. 
{code}
Done in 003.
{code:java}
3. We should add some doc for getTrashHome method consistent with 
getHomeDirectory.
{code}
Done in 003.

 

> Add a configuration item for special trash dir
> --
>
> Key: HDFS-16028
> URL: https://issues.apache.org/jira/browse/HDFS-16028
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDFS-16028.001.patch, HDFS-16028.002.patch, 
> HDFS-16028.003.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In some situations, we don't want to put the trash in the home directory, 
> for example:
>  # To immediately reduce the quota usage of the home directory
>  # In RBF: we want the mount strategy for the trash directory to differ 
> from that of the home directory, and we don't want to mount it per user
> This patch adds the option "fs.trash.dir" to specify the trash directory 
> (${fs.trash.dir}/$USER/.Trash)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?focusedWorklogId=601007&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-601007
 ]

ASF GitHub Bot logged work on HDFS-15998:
-

Author: ASF GitHub Bot
Created on: 24/May/21 04:20
Start Date: 24/May/21 04:20
Worklog Time Spent: 10m 
  Work Description: haiyang1987 commented on pull request #3036:
URL: https://github.com/apache/hadoop/pull/3036#issuecomment-846720962


   @jojochuang Thanks for the comments. I'll try to add a unit test later.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 601007)
Time Spent: 50m  (was: 40m)

> Fix NullPointerException In listOpenFiles
> ---
>
> Key: HDFS-15998
> URL: https://issues.apache.org/jira/browse/HDFS-15998
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Using the Hadoop 3.2.0 client to execute the following command occasionally 
> throws an NPE:
> hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
> -path /xxx
>  
> {quote}
>  org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   ...
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>   at com.sun.proxy.$Proxy10.listOpenFiles(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:89)
>   at 
> org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:35)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
>   at 
> org.apache.hadoop.hdfs.tools.DFSAdmin.printOpenFiles(DFSAdmin.java:1006)
>   at 
> org.apache.hadoop.hdfs.tools.DFSAdmin.listOpenFiles(DFSAdmin.java:994)
>   at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2431)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2590)
>  List open files failed.
>  listOpenFiles: java.lang.NullPointerException
> {quote}
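
The trace points at a lookup that can return null when a file is closed or
deleted between listing and inspection. Below is a toy, self-contained sketch
of the defensive pattern this implies; all names are hypothetical, not the
actual FSNamesystem code.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class OpenFilesScan {
  static final class Lease {
    final String owner, path;
    Lease(String owner, String path) { this.owner = owner; this.path = path; }
  }

  public static void main(String[] args) {
    Map<Long, Lease> leases = new HashMap<>();
    leases.put(1L, new Lease("hive", "/warehouse/t1"));
    long[] listedIds = {1L, 2L};        // id 2 was closed after the listing

    for (long id : listedIds) {
      Lease lease = leases.get(id);     // may be null for vanished files
      if (lease == null) {
        continue;                       // skip the entry instead of an NPE
      }
      System.out.println(lease.owner + " " + lease.path);
    }
  }
}
{code}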



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16031) Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16031?focusedWorklogId=601022&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-601022
 ]

ASF GitHub Bot logged work on HDFS-16031:
-

Author: ASF GitHub Bot
Created on: 24/May/21 06:06
Start Date: 24/May/21 06:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3027:
URL: https://github.com/apache/hadoop/pull/3027#issuecomment-846772078


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 56s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3027/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 351m  8s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3027/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 443m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | hadoop.cli.TestErasureCodingCLI |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3027/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3027 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux b6609061ac3d 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 
01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU

[jira] [Work logged] (HDFS-16028) Add a configuration item for special trash dir

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16028?focusedWorklogId=601040&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-601040
 ]

ASF GitHub Bot logged work on HDFS-16028:
-

Author: ASF GitHub Bot
Created on: 24/May/21 06:54
Start Date: 24/May/21 06:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3023:
URL: https://github.com/apache/hadoop/pull/3023#issuecomment-846806097


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  20m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  18m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  9s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/3/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 1 new + 225 
unchanged - 0 fixed = 226 total (was 225)  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 179m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3023 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux f38537fd1c2a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 48a439494ba7ca181237e0271f41b28ef477683b |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/3/testReport/ |
   | Ma

[jira] [Work logged] (HDFS-16031) Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap

2021-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16031?focusedWorklogId=601041&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-601041
 ]

ASF GitHub Bot logged work on HDFS-16031:
-

Author: ASF GitHub Bot
Created on: 24/May/21 06:55
Start Date: 24/May/21 06:55
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on a change in pull request #3027:
URL: https://github.com/apache/hadoop/pull/3027#discussion_r637728863



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
##
@@ -320,21 +320,18 @@ static File createSnapshot(InMemoryAliasMap aliasMap) 
throws IOException {
   private static File getCompressedAliasMap(File aliasMapDir)
   throws IOException {
 File outCompressedFile = new File(aliasMapDir.getParent(), TAR_NAME);
-BufferedOutputStream bOut = null;
-GzipCompressorOutputStream gzOut = null;
-TarArchiveOutputStream tOut = null;
-try {
-  bOut = new BufferedOutputStream(
-  Files.newOutputStream(outCompressedFile.toPath()));
-  gzOut = new GzipCompressorOutputStream(bOut);
-  tOut = new TarArchiveOutputStream(gzOut);
+
+try (BufferedOutputStream bOut = new BufferedOutputStream(
+Files.newOutputStream(outCompressedFile.toPath()));
+ GzipCompressorOutputStream gzOut = new 
GzipCompressorOutputStream(bOut);
+ TarArchiveOutputStream tOut = new TarArchiveOutputStream(gzOut)){
+
   addFileToTarGzRecursively(tOut, aliasMapDir, "", new Configuration());
-} finally {
   if (tOut != null) {
 tOut.finish();
   }

Review comment:
   Thanks for the update.
   - Before: `tOut.finish()` is called if addFileToTarGzRecursively throws an 
exception.
   - Your patch: `tOut.finish()` is not called if addFileToTarGzRecursively 
throws an exception.
   
   I think we need an extra try-finally clause:
   ```java
 try {
   addFileToTarGzRecursively(tOut, aliasMapDir, "", new 
Configuration());
 } finally {
   tOut.finish();
 }
   ```
   tOut cannot be null inside the try-with-resources block, so we can remove 
the null check.
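
A self-contained version of the suggested pattern (assumes commons-compress on
the classpath; the single directory entry stands in for the real
addFileToTarGzRecursively): try-with-resources guarantees close() on all three
streams, and the inner finally guarantees finish() writes the tar trailer even
if the body throws.

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;

public class TarGzSketch {
  static void writeTarGz(File dir, File out) throws IOException {
    try (BufferedOutputStream bOut = new BufferedOutputStream(
            Files.newOutputStream(out.toPath()));
         GzipCompressorOutputStream gzOut =
             new GzipCompressorOutputStream(bOut);
         TarArchiveOutputStream tOut = new TarArchiveOutputStream(gzOut)) {
      try {
        tOut.putArchiveEntry(tOut.createArchiveEntry(dir, dir.getName()));
        tOut.closeArchiveEntry();
      } finally {
        tOut.finish();  // flush the archive trailer before close()
      }
    }
  }
}
```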




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 601041)
Time Spent: 50m  (was: 40m)

> Possible Resource Leak in 
> org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap
> -
>
> Key: HDFS-16031
> URL: https://issues.apache.org/jira/browse/HDFS-16031
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Narges Shadab
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> We noticed a possible resource leak in 
> [getCompressedAliasMap|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java#L320].
>  If {{finish()}} at line 334 throws an IOException, then {{tOut}}, {{gzOut}}, 
> and {{bOut}} remain open, since the exception isn't caught locally and there 
> is no way for any caller to close them.
> I've submitted a pull request to fix it.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org