[hadoop] branch branch-3.2 updated: HDFS-14598. Findbugs warning caused by HDFS-12487. Contributed by He Xiaoqiao.

2019-06-24 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new e7fce21  HDFS-14598. Findbugs warning caused by HDFS-12487. Contributed by He Xiaoqiao.
e7fce21 is described below

commit e7fce2104f28c48c817822bed9349d44b0694699
Author: Anu Engineer 
AuthorDate: Mon Jun 24 15:34:11 2019 -0700

HDFS-14598. Findbugs warning caused by HDFS-12487. Contributed by He Xiaoqiao.

(cherry picked from commit 041e7a7dee4a17714f31952dc6364c77a65b1b73)
---
 .../hadoop/hdfs/server/datanode/DiskBalancer.java  | 25 --
 1 file changed, 9 insertions(+), 16 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
index ee64d8d..8d5660e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
@@ -906,24 +906,17 @@ public class DiskBalancer {
       if(null == block){
         LOG.info("NextBlock call returned null.No valid block to copy. {}",
             item.toJson());
-        return block;
+        return null;
       }
-
-      if (block != null) {
-        // A valid block is a finalized block, we iterate until we get
-        // finalized blocks
-        if (!this.dataset.isValidBlock(block)) {
-          continue;
-        }
-
-        // We don't look for the best, we just do first fit
-        if (isLessThanNeeded(block.getNumBytes(), item)) {
-          return block;
-        }
-      } else {
-        LOG.info("There are no blocks in the blockPool {}", iter.getBlockPoolId());
+      // A valid block is a finalized block, we iterate until we get
+      // finalized blocks
+      if (!this.dataset.isValidBlock(block)) {
+        continue;
+      }
+      // We don't look for the best, we just do first fit
+      if (isLessThanNeeded(block.getNumBytes(), item)) {
+        return block;
       }
-
     } catch (IOException e) {
       item.incErrorCount();
     }
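
The Findbugs complaint here is a redundant null check: once the early
return handles the null case, the later "if (block != null)" is always
true and its "else" branch can never execute, so flattening the loop
removes dead code without changing behavior. A minimal, self-contained
sketch of the same before/after shape (pick, item and limit are
hypothetical names, not the DiskBalancer API):

import java.util.Iterator;
import java.util.List;

public class FirstFitPicker {

  // Returns the first candidate that fits within limit, or null if none.
  static String pick(Iterator<String> it, int limit) {
    while (it.hasNext()) {
      String item = it.next();
      if (item == null) {
        return null;              // early exit, like the null-block case
      }
      // After the early return above, item cannot be null here, so a
      // second "if (item != null)" check (and its else branch) would be
      // dead code -- exactly what Findbugs flagged before this patch.
      if (item.length() > limit) {
        continue;                 // skip candidates that do not qualify
      }
      return item;                // first fit, not best fit
    }
    return null;
  }

  public static void main(String[] args) {
    System.out.println(pick(List.of("abcdef", "ab").iterator(), 3)); // ab
  }
}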





[hadoop] branch branch-3.1 updated: HDFS-14598. Findbugs warning caused by HDFS-12487. Contributed by He Xiaoqiao.

2019-06-24 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 1bae5c7  HDFS-14598. Findbugs warning caused by HDFS-12487. Contributed by He Xiaoqiao.
1bae5c7 is described below

commit 1bae5c7024feb20eb5d6e52f54aa402fdd77a396
Author: Anu Engineer 
AuthorDate: Mon Jun 24 15:34:11 2019 -0700

HDFS-14598. Findbugs warning caused by HDFS-12487. Contributed by He Xiaoqiao.

(cherry picked from commit 041e7a7dee4a17714f31952dc6364c77a65b1b73)
(cherry picked from commit e7fce2104f28c48c817822bed9349d44b0694699)
---
 .../hadoop/hdfs/server/datanode/DiskBalancer.java  | 25 --
 1 file changed, 9 insertions(+), 16 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
index ee64d8d..8d5660e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
@@ -906,24 +906,17 @@ public class DiskBalancer {
       if(null == block){
         LOG.info("NextBlock call returned null.No valid block to copy. {}",
             item.toJson());
-        return block;
+        return null;
       }
-
-      if (block != null) {
-        // A valid block is a finalized block, we iterate until we get
-        // finalized blocks
-        if (!this.dataset.isValidBlock(block)) {
-          continue;
-        }
-
-        // We don't look for the best, we just do first fit
-        if (isLessThanNeeded(block.getNumBytes(), item)) {
-          return block;
-        }
-      } else {
-        LOG.info("There are no blocks in the blockPool {}", iter.getBlockPoolId());
+      // A valid block is a finalized block, we iterate until we get
+      // finalized blocks
+      if (!this.dataset.isValidBlock(block)) {
+        continue;
+      }
+      // We don't look for the best, we just do first fit
+      if (isLessThanNeeded(block.getNumBytes(), item)) {
+        return block;
       }
-
     } catch (IOException e) {
       item.incErrorCount();
     }





[hadoop] branch branch-3.0 updated: HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

2019-06-24 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 9daa45f  HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.
9daa45f is described below

commit 9daa45f646d9eddda21c087c23e6d1498f98c055
Author: Inigo Goiri 
AuthorDate: Fri Mar 1 09:18:51 2019 -0800

HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

(cherry picked from commit 80b77deb42a3ef94d6bef160bc58d807f2faa104)
(cherry picked from commit 96371245357bda63b3ede10f37a37f5333a85d69)
(cherry picked from commit 90b88db35d42f2eab4da7f192a5fb99d9c834abb)
---
 .../org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java   | 1 -
 1 file changed, 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 1fb27c7..61fa842 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -1130,7 +1130,6 @@ public class DatanodeManager {
       nodeDescr.setDependentHostNames(
           getNetworkDependenciesWithDefault(nodeDescr));
     }
-    networktopology.add(nodeDescr);
     nodeDescr.setSoftwareVersion(nodeReg.getSoftwareVersion());
     resolveUpgradeDomain(nodeDescr);
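
HDFS-14247's fix is one removed line: the registration path already adds
nodeDescr to the topology once, so this second networktopology.add(nodeDescr)
registered the same datanode twice and skewed the topology's bookkeeping.
A self-contained sketch of why a non-idempotent add is harmful (ToyTopology
is illustrative, not Hadoop's NetworkTopology):

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ToyTopology {
  // Per-rack node counts, the kind of state replica placement relies on.
  private final Map<String, Integer> rackCounts = new HashMap<>();
  private final Set<String> nodes = new HashSet<>();

  void add(String rack, String node) {
    if (!nodes.add(node)) {
      return;                       // idempotent: ignore repeated registration
    }
    rackCounts.merge(rack, 1, Integer::sum);
  }

  public static void main(String[] args) {
    ToyTopology t = new ToyTopology();
    t.add("/rack1", "dn1");
    t.add("/rack1", "dn1");         // the duplicate call this patch removes
    System.out.println(t.rackCounts); // {/rack1=1}; without the guard it is 2
  }
}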
 





[hadoop] branch branch-3.2 updated: HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

2019-06-24 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 9637124  HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.
9637124 is described below

commit 96371245357bda63b3ede10f37a37f5333a85d69
Author: Inigo Goiri 
AuthorDate: Fri Mar 1 09:18:51 2019 -0800

HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

(cherry picked from commit 80b77deb42a3ef94d6bef160bc58d807f2faa104)
---
 .../org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java   | 1 -
 1 file changed, 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 430c0d4..59c17a8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -1134,7 +1134,6 @@ public class DatanodeManager {
       nodeDescr.setDependentHostNames(
           getNetworkDependenciesWithDefault(nodeDescr));
     }
-    networktopology.add(nodeDescr);
     nodeDescr.setSoftwareVersion(nodeReg.getSoftwareVersion());
     resolveUpgradeDomain(nodeDescr);
 





[hadoop] branch branch-2.8 updated: HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

2019-06-24 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-2.8
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.8 by this push:
 new 5a17115  HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.
5a17115 is described below

commit 5a171157ef96dad0592577010aeec9308f3824f9
Author: Inigo Goiri 
AuthorDate: Fri Mar 1 09:18:51 2019 -0800

HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

(cherry picked from commit 80b77deb42a3ef94d6bef160bc58d807f2faa104)
(cherry picked from commit 96371245357bda63b3ede10f37a37f5333a85d69)
(cherry picked from commit 90b88db35d42f2eab4da7f192a5fb99d9c834abb)
(cherry picked from commit 9daa45f646d9eddda21c087c23e6d1498f98c055)
(cherry picked from commit 0272480e9fe72acbfd15be21c81113b7601e5853)
(cherry picked from commit 5a9b94bb647fc8cb9264930704cb5a357c07b1b5)
---
 .../org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java   | 1 -
 1 file changed, 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 4d79e8e..2f483c0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -986,7 +986,6 @@ public class DatanodeManager {
       nodeDescr.setDependentHostNames(
           getNetworkDependenciesWithDefault(nodeDescr));
     }
-    networktopology.add(nodeDescr);
     nodeDescr.setSoftwareVersion(nodeReg.getSoftwareVersion());
     resolveUpgradeDomain(nodeDescr);
 





[hadoop] branch branch-3.1 updated: HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

2019-06-24 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 90b88db  HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.
90b88db is described below

commit 90b88db35d42f2eab4da7f192a5fb99d9c834abb
Author: Inigo Goiri 
AuthorDate: Fri Mar 1 09:18:51 2019 -0800

HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

(cherry picked from commit 80b77deb42a3ef94d6bef160bc58d807f2faa104)
(cherry picked from commit 96371245357bda63b3ede10f37a37f5333a85d69)
---
 .../org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java   | 1 -
 1 file changed, 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index b5eda97..539d898 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -1132,7 +1132,6 @@ public class DatanodeManager {
       nodeDescr.setDependentHostNames(
           getNetworkDependenciesWithDefault(nodeDescr));
     }
-    networktopology.add(nodeDescr);
     nodeDescr.setSoftwareVersion(nodeReg.getSoftwareVersion());
     resolveUpgradeDomain(nodeDescr);
 





[hadoop] branch branch-2.9 updated: HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

2019-06-24 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-2.9
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.9 by this push:
 new 5a9b94b  HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.
5a9b94b is described below

commit 5a9b94bb647fc8cb9264930704cb5a357c07b1b5
Author: Inigo Goiri 
AuthorDate: Fri Mar 1 09:18:51 2019 -0800

HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

(cherry picked from commit 80b77deb42a3ef94d6bef160bc58d807f2faa104)
(cherry picked from commit 96371245357bda63b3ede10f37a37f5333a85d69)
(cherry picked from commit 90b88db35d42f2eab4da7f192a5fb99d9c834abb)
(cherry picked from commit 9daa45f646d9eddda21c087c23e6d1498f98c055)
(cherry picked from commit 0272480e9fe72acbfd15be21c81113b7601e5853)
---
 .../org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java   | 1 -
 1 file changed, 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index f9209a3..611bbb9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -1060,7 +1060,6 @@ public class DatanodeManager {
       nodeDescr.setDependentHostNames(
           getNetworkDependenciesWithDefault(nodeDescr));
     }
-    networktopology.add(nodeDescr);
     nodeDescr.setSoftwareVersion(nodeReg.getSoftwareVersion());
     resolveUpgradeDomain(nodeDescr);
 





[hadoop] branch branch-2 updated: HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

2019-06-24 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 0272480  HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.
0272480 is described below

commit 0272480e9fe72acbfd15be21c81113b7601e5853
Author: Inigo Goiri 
AuthorDate: Fri Mar 1 09:18:51 2019 -0800

HDFS-14247. Repeat adding node description into network topology. Contributed by HuangTao.

(cherry picked from commit 80b77deb42a3ef94d6bef160bc58d807f2faa104)
(cherry picked from commit 96371245357bda63b3ede10f37a37f5333a85d69)
(cherry picked from commit 90b88db35d42f2eab4da7f192a5fb99d9c834abb)
(cherry picked from commit 9daa45f646d9eddda21c087c23e6d1498f98c055)
---
 .../org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java   | 1 -
 1 file changed, 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 9f0e502..fe94a31 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -1060,7 +1060,6 @@ public class DatanodeManager {
       nodeDescr.setDependentHostNames(
           getNetworkDependenciesWithDefault(nodeDescr));
     }
-    networktopology.add(nodeDescr);
     nodeDescr.setSoftwareVersion(nodeReg.getSoftwareVersion());
     resolveUpgradeDomain(nodeDescr);
 





[hadoop] 01/01: HDFS-13643. Implement basic async rpc client

2019-06-24 Thread zhangduo
This is an automated email from the ASF dual-hosted git repository.

zhangduo pushed a commit to branch HDFS-13572
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit edd741d7a7e043268bd8a559d1de73019e58a839
Author: zhangduo 
AuthorDate: Mon Jun 4 21:54:45 2018 +0800

HDFS-13643. Implement basic async rpc client
---
 .../hadoop-client-minicluster/pom.xml  |   4 +
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml |  29 +++-
 .../hdfs/ipc/BufferCallBeforeInitHandler.java  | 100 
 .../main/java/org/apache/hadoop/hdfs/ipc/Call.java | 132 
 .../org/apache/hadoop/hdfs/ipc/ConnectionId.java   |  71 +
 .../apache/hadoop/hdfs/ipc/HdfsRpcController.java  |  74 +
 .../java/org/apache/hadoop/hdfs/ipc/IPCUtil.java   |  34 
 .../java/org/apache/hadoop/hdfs/ipc/RpcClient.java | 128 +++
 .../org/apache/hadoop/hdfs/ipc/RpcConnection.java  | 153 ++
 .../apache/hadoop/hdfs/ipc/RpcDuplexHandler.java   | 175 +
 .../org/apache/hadoop/hdfs/ipc/TestAsyncIPC.java   |  88 +++
 .../apache/hadoop/hdfs/ipc/TestRpcProtocolPB.java  |  27 
 .../org/apache/hadoop/hdfs/ipc/TestServer.java |  58 +++
 .../src/test/proto/test_rpc.proto  |  35 +
 14 files changed, 1103 insertions(+), 5 deletions(-)

diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml 
b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index 918b374..594e28c 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -115,6 +115,10 @@
   netty
 
 
+  io.netty
+  netty-all
+
+
   javax.servlet
   javax.servlet-api
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
index 8769bef..863f700 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
@@ -39,6 +39,11 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd";>
   okhttp
 
 
+  io.netty
+  netty-all
+  compile
+
+
   org.apache.hadoop
   hadoop-common
   provided
@@ -64,11 +69,6 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd";>
   test
 
 
-  io.netty
-  netty-all
-  test
-
-
   org.mock-server
   mockserver-netty
   test
@@ -163,6 +163,25 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd";>
   
 
   
+  
+compile-test-protoc
+
+  test-protoc
+
+
+  ${protobuf.version}
+  ${protoc.path}
+  
+${basedir}/src/test/proto
+  
+  
+${basedir}/src/test/proto
+
+  test_rpc.proto
+
+  
+
+  
 
   
   
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/BufferCallBeforeInitHandler.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/BufferCallBeforeInitHandler.java
new file mode 100644
index 000..89433e9
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/BufferCallBeforeInitHandler.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import io.netty.channel.ChannelDuplexHandler;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelPromise;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.hadoop.classification.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class BufferCallBeforeInitHandler extends ChannelDuplexHandler {
+
+  private enum BufferCallAction {
+    FLUSH, FAIL
+  }
+
+  public static final class BufferCallEvent {
+
+    public final BufferCallAction action;
+
+    public final IOException error;
+
+    private BufferCallEvent(BufferCallBeforeInitHandler.BufferCallAction
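
The listing is truncated here, but the handler's name and its FLUSH/FAIL
actions point at the standard buffer-before-init pattern: writes issued
before the RPC connection handshake completes are queued, then either
replayed in order or failed as a batch once the outcome is known. A rough,
Netty-free sketch of that pattern (all names below are illustrative):

import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

public class BufferBeforeInit {
  private final Queue<String> pending = new ArrayDeque<>();
  private final Consumer<String> transport;
  private boolean ready = false;

  BufferBeforeInit(Consumer<String> transport) { this.transport = transport; }

  synchronized void write(String call) {
    if (ready) {
      transport.accept(call);     // connection is up: send immediately
    } else {
      pending.add(call);          // buffer until the handshake finishes
    }
  }

  synchronized void onInitDone(IOException error) {
    ready = (error == null);
    while (!pending.isEmpty()) {
      String call = pending.poll();
      if (error == null) {
        transport.accept(call);   // FLUSH: replay buffered calls in order
      } else {
        System.err.println("FAIL " + call + ": " + error);
      }
    }
  }

  public static void main(String[] args) {
    BufferBeforeInit ch =
        new BufferBeforeInit(c -> System.out.println("sent " + c));
    ch.write("call-1");
    ch.write("call-2");
    ch.onInitDone(null);          // handshake ok: both calls are flushed
  }
}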

[hadoop] branch trunk updated: HDFS-14598. Findbugs warning caused by HDFS-12487. Contributed by He Xiaoqiao.

2019-06-24 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 041e7a7  HDFS-14598. Findbugs warning caused by HDFS-12487. Contributed by He Xiaoqiao.
041e7a7 is described below

commit 041e7a7dee4a17714f31952dc6364c77a65b1b73
Author: Anu Engineer 
AuthorDate: Mon Jun 24 15:34:11 2019 -0700

HDFS-14598. Findbugs warning caused by HDFS-12487. Contributed by He Xiaoqiao.
---
 .../hadoop/hdfs/server/datanode/DiskBalancer.java  | 25 --
 1 file changed, 9 insertions(+), 16 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
index f8d4ea4..9183344 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
@@ -908,24 +908,17 @@ public class DiskBalancer {
       if(null == block){
         LOG.info("NextBlock call returned null.No valid block to copy. {}",
             item.toJson());
-        return block;
+        return null;
       }
-
-      if (block != null) {
-        // A valid block is a finalized block, we iterate until we get
-        // finalized blocks
-        if (!this.dataset.isValidBlock(block)) {
-          continue;
-        }
-
-        // We don't look for the best, we just do first fit
-        if (isLessThanNeeded(block.getNumBytes(), item)) {
-          return block;
-        }
-      } else {
-        LOG.info("There are no blocks in the blockPool {}", iter.getBlockPoolId());
+      // A valid block is a finalized block, we iterate until we get
+      // finalized blocks
+      if (!this.dataset.isValidBlock(block)) {
+        continue;
+      }
+      // We don't look for the best, we just do first fit
+      if (isLessThanNeeded(block.getNumBytes(), item)) {
+        return block;
       }
-
     } catch (IOException e) {
       item.incErrorCount();
     }





[hadoop] branch branch-3.2 updated: HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.

2019-06-24 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 0966407  HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.
0966407 is described below

commit 0966407ad6fddb7adf60dba846d1f7c8a13b8ec1
Author: Inigo Goiri 
AuthorDate: Mon Jun 24 19:02:41 2019 -0700

HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.
---
 .../hdfs/shortcircuit/ShortCircuitCache.java   | 40 ++
 1 file changed, 11 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
index aa982d0..5acac2f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
@@ -109,13 +109,8 @@ public class ShortCircuitCache implements Closeable {
     int numDemoted = demoteOldEvictableMmaped(curMs);
     int numPurged = 0;
     Long evictionTimeNs;
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictable.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictable.isEmpty()) {
+      Object eldestKey = evictable.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -493,13 +488,8 @@ public class ShortCircuitCache implements Closeable {
     boolean needMoreSpace = false;
     Long evictionTimeNs;
 
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictableMmapped.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictableMmapped.isEmpty()) {
+      Object eldestKey = evictableMmapped.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -533,23 +523,15 @@ public class ShortCircuitCache implements Closeable {
     long now = Time.monotonicNow();
     demoteOldEvictableMmaped(now);
 
-    while (true) {
-      long evictableSize = evictable.size();
-      long evictableMmappedSize = evictableMmapped.size();
-      if (evictableSize + evictableMmappedSize <= maxTotalSize) {
-        return;
-      }
+    while (evictable.size() + evictableMmapped.size() > maxTotalSize) {
       ShortCircuitReplica replica;
-      try {
-        if (evictableSize == 0) {
-          replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
-              .firstKey());
-        } else {
-          replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
-        }
-      } catch (NoSuchElementException e) {
-        break;
+      if (evictable.isEmpty()) {
+        replica = (ShortCircuitReplica) evictableMmapped
+            .get(evictableMmapped.firstKey());
+      } else {
+        replica = (ShortCircuitReplica) evictable.get(evictable.firstKey());
       }
+
      if (LOG.isTraceEnabled()) {
         LOG.trace(this + ": trimEvictionMaps is purging " + replica +
             StringUtils.getStackTrace(Thread.currentThread()));
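
The shape of the change is the interesting part: the old loops called
firstKey() on a possibly-empty map and used NoSuchElementException as the
exit condition, i.e. exceptions as control flow; the new loops test
isEmpty() first, which is cheaper and easier to read. A small runnable
sketch of the same loop shape (the TreeMap contents and cutoffNs are made
up for illustration):

import java.util.TreeMap;

public class EvictionLoop {
  public static void main(String[] args) {
    TreeMap<Long, String> evictable = new TreeMap<>();
    evictable.put(3L, "replica-a");
    evictable.put(7L, "replica-b");

    long cutoffNs = 5L;
    while (!evictable.isEmpty()) {           // replaces while(true)+try/catch
      long eldestKey = evictable.firstKey(); // safe: map is known non-empty
      if (eldestKey > cutoffNs) {
        break;                               // eldest entry is still fresh
      }
      System.out.println("purging " + evictable.remove(eldestKey));
    }
  }
}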





[hadoop] branch branch-2 updated: HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.

2019-06-24 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 1373889  HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.
1373889 is described below

commit 1373889ac21e23bb9e49fd9656de1a5d132eec54
Author: Inigo Goiri 
AuthorDate: Mon Jun 24 19:02:41 2019 -0700

HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.

(cherry picked from commit 0966407ad6fddb7adf60dba846d1f7c8a13b8ec1)
---
 .../hdfs/shortcircuit/ShortCircuitCache.java   | 40 ++
 1 file changed, 11 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
index b26652b..a92f295 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
@@ -109,13 +109,8 @@ public class ShortCircuitCache implements Closeable {
     int numDemoted = demoteOldEvictableMmaped(curMs);
     int numPurged = 0;
     Long evictionTimeNs;
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictable.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictable.isEmpty()) {
+      Object eldestKey = evictable.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -493,13 +488,8 @@ public class ShortCircuitCache implements Closeable {
     boolean needMoreSpace = false;
     Long evictionTimeNs;
 
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictableMmapped.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictableMmapped.isEmpty()) {
+      Object eldestKey = evictableMmapped.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -533,23 +523,15 @@ public class ShortCircuitCache implements Closeable {
     long now = Time.monotonicNow();
     demoteOldEvictableMmaped(now);
 
-    while (true) {
-      long evictableSize = evictable.size();
-      long evictableMmappedSize = evictableMmapped.size();
-      if (evictableSize + evictableMmappedSize <= maxTotalSize) {
-        return;
-      }
+    while (evictable.size() + evictableMmapped.size() > maxTotalSize) {
       ShortCircuitReplica replica;
-      try {
-        if (evictableSize == 0) {
-          replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
-              .firstKey());
-        } else {
-          replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
-        }
-      } catch (NoSuchElementException e) {
-        break;
+      if (evictable.isEmpty()) {
+        replica = (ShortCircuitReplica) evictableMmapped
+            .get(evictableMmapped.firstKey());
+      } else {
+        replica = (ShortCircuitReplica) evictable.get(evictable.firstKey());
      }
+
      if (LOG.isTraceEnabled()) {
         LOG.trace(this + ": trimEvictionMaps is purging " + replica +
             StringUtils.getStackTrace(Thread.currentThread()));





[hadoop] branch branch-3.1 updated: HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.

2019-06-24 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new aaf74ea  HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.
aaf74ea is described below

commit aaf74ea5d73a8445774e67bb1c58c5730a247a47
Author: Inigo Goiri 
AuthorDate: Mon Jun 24 19:02:41 2019 -0700

HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.

(cherry picked from commit 0966407ad6fddb7adf60dba846d1f7c8a13b8ec1)
---
 .../hdfs/shortcircuit/ShortCircuitCache.java   | 40 ++
 1 file changed, 11 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
index c2f0350..d91dd7d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
@@ -109,13 +109,8 @@ public class ShortCircuitCache implements Closeable {
     int numDemoted = demoteOldEvictableMmaped(curMs);
     int numPurged = 0;
     Long evictionTimeNs;
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictable.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictable.isEmpty()) {
+      Object eldestKey = evictable.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -493,13 +488,8 @@ public class ShortCircuitCache implements Closeable {
     boolean needMoreSpace = false;
     Long evictionTimeNs;
 
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictableMmapped.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictableMmapped.isEmpty()) {
+      Object eldestKey = evictableMmapped.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -533,23 +523,15 @@ public class ShortCircuitCache implements Closeable {
     long now = Time.monotonicNow();
     demoteOldEvictableMmaped(now);
 
-    while (true) {
-      long evictableSize = evictable.size();
-      long evictableMmappedSize = evictableMmapped.size();
-      if (evictableSize + evictableMmappedSize <= maxTotalSize) {
-        return;
-      }
+    while (evictable.size() + evictableMmapped.size() > maxTotalSize) {
       ShortCircuitReplica replica;
-      try {
-        if (evictableSize == 0) {
-          replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
-              .firstKey());
-        } else {
-          replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
-        }
-      } catch (NoSuchElementException e) {
-        break;
+      if (evictable.isEmpty()) {
+        replica = (ShortCircuitReplica) evictableMmapped
+            .get(evictableMmapped.firstKey());
+      } else {
+        replica = (ShortCircuitReplica) evictable.get(evictable.firstKey());
      }
+
      if (LOG.isTraceEnabled()) {
         LOG.trace(this + ": trimEvictionMaps is purging " + replica +
             StringUtils.getStackTrace(Thread.currentThread()));





[hadoop] branch branch-3.0 updated: HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.

2019-06-24 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 9aae8b0  HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.
9aae8b0 is described below

commit 9aae8b05a76cdb7c13a886b4bafe5b99c7799d36
Author: Inigo Goiri 
AuthorDate: Mon Jun 24 19:02:41 2019 -0700

HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.

(cherry picked from commit 0966407ad6fddb7adf60dba846d1f7c8a13b8ec1)
---
 .../hdfs/shortcircuit/ShortCircuitCache.java   | 40 ++
 1 file changed, 11 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
index c2f0350..d91dd7d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
@@ -109,13 +109,8 @@ public class ShortCircuitCache implements Closeable {
     int numDemoted = demoteOldEvictableMmaped(curMs);
     int numPurged = 0;
     Long evictionTimeNs;
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictable.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictable.isEmpty()) {
+      Object eldestKey = evictable.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -493,13 +488,8 @@ public class ShortCircuitCache implements Closeable {
     boolean needMoreSpace = false;
     Long evictionTimeNs;
 
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictableMmapped.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictableMmapped.isEmpty()) {
+      Object eldestKey = evictableMmapped.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -533,23 +523,15 @@ public class ShortCircuitCache implements Closeable {
     long now = Time.monotonicNow();
     demoteOldEvictableMmaped(now);
 
-    while (true) {
-      long evictableSize = evictable.size();
-      long evictableMmappedSize = evictableMmapped.size();
-      if (evictableSize + evictableMmappedSize <= maxTotalSize) {
-        return;
-      }
+    while (evictable.size() + evictableMmapped.size() > maxTotalSize) {
       ShortCircuitReplica replica;
-      try {
-        if (evictableSize == 0) {
-          replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
-              .firstKey());
-        } else {
-          replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
-        }
-      } catch (NoSuchElementException e) {
-        break;
+      if (evictable.isEmpty()) {
+        replica = (ShortCircuitReplica) evictableMmapped
+            .get(evictableMmapped.firstKey());
+      } else {
+        replica = (ShortCircuitReplica) evictable.get(evictable.firstKey());
      }
+
      if (LOG.isTraceEnabled()) {
         LOG.trace(this + ": trimEvictionMaps is purging " + replica +
             StringUtils.getStackTrace(Thread.currentThread()));





[hadoop] branch branch-2.9 updated: HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.

2019-06-24 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch branch-2.9
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.9 by this push:
 new c2af516  HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.
c2af516 is described below

commit c2af516a9764ce30d826f5cc9f488814c1f3bd8b
Author: Inigo Goiri 
AuthorDate: Mon Jun 24 19:02:41 2019 -0700

HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException.

(cherry picked from commit 0966407ad6fddb7adf60dba846d1f7c8a13b8ec1)
---
 .../hdfs/shortcircuit/ShortCircuitCache.java   | 40 ++
 1 file changed, 11 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
index bd02a97..77e85cb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
@@ -109,13 +109,8 @@ public class ShortCircuitCache implements Closeable {
     int numDemoted = demoteOldEvictableMmaped(curMs);
     int numPurged = 0;
     Long evictionTimeNs;
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictable.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictable.isEmpty()) {
+      Object eldestKey = evictable.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -488,13 +483,8 @@ public class ShortCircuitCache implements Closeable {
     boolean needMoreSpace = false;
     Long evictionTimeNs;
 
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictableMmapped.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictableMmapped.isEmpty()) {
+      Object eldestKey = evictableMmapped.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -528,23 +518,15 @@ public class ShortCircuitCache implements Closeable {
     long now = Time.monotonicNow();
     demoteOldEvictableMmaped(now);
 
-    while (true) {
-      long evictableSize = evictable.size();
-      long evictableMmappedSize = evictableMmapped.size();
-      if (evictableSize + evictableMmappedSize <= maxTotalSize) {
-        return;
-      }
+    while (evictable.size() + evictableMmapped.size() > maxTotalSize) {
       ShortCircuitReplica replica;
-      try {
-        if (evictableSize == 0) {
-          replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
-              .firstKey());
-        } else {
-          replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
-        }
-      } catch (NoSuchElementException e) {
-        break;
+      if (evictable.isEmpty()) {
+        replica = (ShortCircuitReplica) evictableMmapped
+            .get(evictableMmapped.firstKey());
+      } else {
+        replica = (ShortCircuitReplica) evictable.get(evictable.firstKey());
      }
+
      if (LOG.isTraceEnabled()) {
         LOG.trace(this + ": trimEvictionMaps is purging " + replica +
             StringUtils.getStackTrace(Thread.currentThread()));





[hadoop] branch trunk updated: HDFS-13371. NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.X. Contributed by Sherwood Zheng.

2019-06-24 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b76b843  HDFS-13371. NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.X. Contributed by Sherwood Zheng.
b76b843 is described below

commit b76b843c8bd3906aaa5ad633d8a939aebc671907
Author: Inigo Goiri 
AuthorDate: Mon Jun 24 17:52:33 2019 -0700

HDFS-13371. NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.X. Contributed by Sherwood Zheng.
---
 .../java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
index 829d3ef..b0eb99c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
@@ -2265,7 +2265,7 @@ public class PBHelperClient {
 
   public static FsServerDefaultsProto convert(FsServerDefaults fs) {
     if (fs == null) return null;
-    return FsServerDefaultsProto.newBuilder().
+    FsServerDefaultsProto.Builder builder = FsServerDefaultsProto.newBuilder().
         setBlockSize(fs.getBlockSize()).
         setBytesPerChecksum(fs.getBytesPerChecksum()).
         setWritePacketSize(fs.getWritePacketSize())
@@ -2274,9 +2274,11 @@ public class PBHelperClient {
         .setEncryptDataTransfer(fs.getEncryptDataTransfer())
         .setTrashInterval(fs.getTrashInterval())
         .setChecksumType(convert(fs.getChecksumType()))
-        .setKeyProviderUri(fs.getKeyProviderUri())
-        .setPolicyId(fs.getDefaultStoragePolicyId())
-        .build();
+        .setPolicyId(fs.getDefaultStoragePolicyId());
+    if (fs.getKeyProviderUri() != null) {
+      builder.setKeyProviderUri(fs.getKeyProviderUri());
+    }
+    return builder.build();
   }
 
   public static EnumSetWritable<CreateFlag> convertCreateFlag(int flag) {
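
The root cause: generated protobuf setters throw NullPointerException when
handed null, and a 2.7-era NameNode reports no key provider URI, so
fs.getKeyProviderUri() can legitimately be null. Holding the builder in a
local variable lets the optional field be set only when present. A
self-contained sketch of the guard (ToyDefaultsProto stands in for the
generated FsServerDefaultsProto builder):

public class NullSafeBuilder {

  static final class ToyDefaultsProto {
    private String keyProviderUri = "";     // unset fields keep a default

    ToyDefaultsProto setKeyProviderUri(String uri) {
      if (uri == null) {
        throw new NullPointerException();   // what generated setters do
      }
      this.keyProviderUri = uri;
      return this;
    }
  }

  static ToyDefaultsProto convert(String keyProviderUri) {
    ToyDefaultsProto builder = new ToyDefaultsProto();
    if (keyProviderUri != null) {           // the HDFS-13371 guard
      builder.setKeyProviderUri(keyProviderUri);
    }
    return builder;
  }

  public static void main(String[] args) {
    convert(null);  // no NPE; an old server simply leaves the field unset
    System.out.println("ok");
  }
}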





[hadoop] branch trunk updated: when evictableMmapped or evictable size is zero, do not throw NoSuchElementException

2019-06-24 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new daa1e14  when evictableMmapped or evictable size is zero, do not throw NoSuchElementException
 new 38a560c  Merge pull request #977 from leosunli/trunk
daa1e14 is described below

commit daa1e147454c3f14704aca904aaae168b47f3de0
Author: sunlisheng 
AuthorDate: Mon Jun 17 19:32:41 2019 +0800

when evictableMmapped or evictable size is zero, do not throw NoSuchElementException

Signed-off-by: sunlisheng 
---
 .../hdfs/shortcircuit/ShortCircuitCache.java   | 40 ++
 1 file changed, 11 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
index c2f0350..d91dd7d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
@@ -109,13 +109,8 @@ public class ShortCircuitCache implements Closeable {
     int numDemoted = demoteOldEvictableMmaped(curMs);
     int numPurged = 0;
     Long evictionTimeNs;
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictable.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictable.isEmpty()) {
+      Object eldestKey = evictable.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -493,13 +488,8 @@ public class ShortCircuitCache implements Closeable {
     boolean needMoreSpace = false;
     Long evictionTimeNs;
 
-    while (true) {
-      Object eldestKey;
-      try {
-        eldestKey = evictableMmapped.firstKey();
-      } catch (NoSuchElementException e) {
-        break;
-      }
+    while (!evictableMmapped.isEmpty()) {
+      Object eldestKey = evictableMmapped.firstKey();
       evictionTimeNs = (Long)eldestKey;
       long evictionTimeMs =
           TimeUnit.MILLISECONDS.convert(evictionTimeNs, TimeUnit.NANOSECONDS);
@@ -533,23 +523,15 @@ public class ShortCircuitCache implements Closeable {
     long now = Time.monotonicNow();
     demoteOldEvictableMmaped(now);
 
-    while (true) {
-      long evictableSize = evictable.size();
-      long evictableMmappedSize = evictableMmapped.size();
-      if (evictableSize + evictableMmappedSize <= maxTotalSize) {
-        return;
-      }
+    while (evictable.size() + evictableMmapped.size() > maxTotalSize) {
       ShortCircuitReplica replica;
-      try {
-        if (evictableSize == 0) {
-          replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
-              .firstKey());
-        } else {
-          replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
-        }
-      } catch (NoSuchElementException e) {
-        break;
+      if (evictable.isEmpty()) {
+        replica = (ShortCircuitReplica) evictableMmapped
+            .get(evictableMmapped.firstKey());
+      } else {
+        replica = (ShortCircuitReplica) evictable.get(evictable.firstKey());
      }
+
      if (LOG.isTraceEnabled()) {
         LOG.trace(this + ": trimEvictionMaps is purging " + replica +
             StringUtils.getStackTrace(Thread.currentThread()));





[hadoop] branch trunk updated: HDFS-14403. Cost-based extension to the RPC Fair Call Queue. Contributed by Christopher Gregorian.

2019-06-24 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 129576f  HDFS-14403. Cost-based extension to the RPC Fair Call Queue. Contributed by Christopher Gregorian.
129576f is described below

commit 129576f628d370def74e56112aba3a93e97bbf70
Author: Christopher Gregorian 
AuthorDate: Fri May 24 17:09:52 2019 -0700

HDFS-14403. Cost-based extension to the RPC Fair Call Queue. Contributed by Christopher Gregorian.
---
 .../apache/hadoop/fs/CommonConfigurationKeys.java  |   1 +
 .../org/apache/hadoop/ipc/CallQueueManager.java|   1 -
 .../java/org/apache/hadoop/ipc/CostProvider.java   |  46 
 .../org/apache/hadoop/ipc/DecayRpcScheduler.java   | 249 -
 .../org/apache/hadoop/ipc/DefaultCostProvider.java |  43 
 .../hadoop/ipc/WeightedTimeCostProvider.java   | 110 +
 .../src/site/markdown/FairCallQueue.md |  19 ++
 .../apache/hadoop/ipc/TestDecayRpcScheduler.java   | 174 +++---
 .../test/java/org/apache/hadoop/ipc/TestRPC.java   | 102 +
 .../hadoop/ipc/TestWeightedTimeCostProvider.java   |  86 +++
 10 files changed, 639 insertions(+), 192 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
index 958113c..876d0ad 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
@@ -106,6 +106,7 @@ public class CommonConfigurationKeys extends CommonConfigurationKeysPublic {
   public static final String IPC_CALLQUEUE_IMPL_KEY = "callqueue.impl";
   public static final String IPC_SCHEDULER_IMPL_KEY = "scheduler.impl";
   public static final String IPC_IDENTITY_PROVIDER_KEY = "identity-provider.impl";
+  public static final String IPC_COST_PROVIDER_KEY = "cost-provider.impl";
   public static final String IPC_BACKOFF_ENABLE = "backoff.enable";
   public static final boolean IPC_BACKOFF_ENABLE_DEFAULT = false;
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
index e18f307..0287656 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
@@ -198,7 +198,6 @@ public class CallQueueManager
   }
 
   // This should be only called once per call and cached in the call object
-  // each getPriorityLevel call will increment the counter for the caller
   int getPriorityLevel(Schedulable e) {
     return scheduler.getPriorityLevel(e);
   }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CostProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CostProvider.java
new file mode 100644
index 000..cf76e7d
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CostProvider.java
@@ -0,0 +1,46 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ipc;
+
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * Used by {@link DecayRpcScheduler} to get the cost of users' operations. This
+ * is configurable using
+ * {@link org.apache.hadoop.fs.CommonConfigurationKeys#IPC_COST_PROVIDER_KEY}.
+ */
+public interface CostProvider {
+
+  /**
+   * Initialize this provider using the given configuration, examining only
+   * ones which fall within the provided namespace.
+   *
+   * @param namespace The namespace to use when looking up configurations.
+   * @param conf The configuration
+   */
+  void init(String namespace, Configuration conf);
+
+  /**
+   * Get cost from {@link ProcessingDetails} which will be used in scheduler.
+   *
+   
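
The interface listing is cut off above. The idea it carries, per the
javadoc, is that DecayRpcScheduler can charge each caller by the measured
cost of its operations rather than a flat count of one per call. A rough
sketch of that idea; the mini interfaces below are illustrative stand-ins,
not the exact CostProvider/ProcessingDetails API:

public class CostProviderSketch {

  interface Details {                 // stand-in for ProcessingDetails
    long processingNanos();
    long lockWaitNanos();
  }

  interface Provider {                // stand-in for CostProvider
    long getCost(Details details);
  }

  // Flat cost: every call costs 1 (the pre-HDFS-14403 behavior).
  static final Provider DEFAULT = d -> 1;

  // Time-weighted cost: expensive calls, e.g. huge listings, count more.
  static final Provider WEIGHTED =
      d -> d.processingNanos() + 2 * d.lockWaitNanos();

  public static void main(String[] args) {
    Details cheap = details(1_000, 0);
    Details heavy = details(5_000_000, 1_000_000);
    System.out.println(WEIGHTED.getCost(cheap));  // 1000
    System.out.println(WEIGHTED.getCost(heavy));  // 7000000
  }

  static Details details(long proc, long lock) {
    return new Details() {
      public long processingNanos() { return proc; }
      public long lockWaitNanos() { return lock; }
    };
  }
}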

[hadoop] branch trunk updated: HDDS-1646. Support real persistence in the k8s example files (#945)

2019-06-24 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d023f1f  HDDS-1646. Support real persistence in the k8s example files (#945)
d023f1f is described below

commit d023f1f8640994d5233c27316a036c746c2f8429
Author: Elek, Márton 
AuthorDate: Mon Jun 24 21:02:35 2019 +0200

HDDS-1646. Support real persistence in the k8s example files  (#945)

* HDDS-1646. Support real persistence in the k8s example files

* ephemeral clusters can be scaled up
---
 .../ozone/datanode-ss-service.yaml}|  8 ++--
 .../ozone/{datanode-ds.yaml => datanode-ss.yaml}   | 29 +
 .../{transformations => definitions}/emptydir.yaml | 20 ++---
 .../emptydir.yaml => definitions/persistence.yaml} | 50 +-
 .../dist/src/main/k8s/definitions/ozone/om-ss.yaml |  2 +
 .../src/main/k8s/definitions/ozone/scm-ss.yaml |  2 +
 .../flekszible.yaml => getting-started/Flekszible} |  4 +-
 .../LICENSE.header}| 10 -
 .../config-configmap.yaml} | 23 +++---
 .../datanode-public-service.yaml}  |  7 ++-
 .../datanode-service.yaml  |  3 ++
 .../datanode-statefulset.yaml  | 28 +++-
 .../getting-started/freon/freon-deployment.yaml}   | 31 --
 .../om-public-service.yaml}|  9 ++--
 .../om-service.yaml}   |  7 ++-
 .../{ozone => getting-started}/om-statefulset.yaml |  2 +
 .../s3g-public-service.yaml}   |  9 ++--
 .../s3g-service.yaml}  |  7 ++-
 .../s3g-statefulset.yaml}  | 22 --
 .../scm-public-service.yaml}   |  9 ++--
 .../scm-service.yaml}  |  7 ++-
 .../scm-statefulset.yaml   |  2 +
 .../{flekszible/flekszible.yaml => Flekszible} |  4 +-
 ...e-service.yaml => datanode-public-service.yaml} |  7 ++-
 .../k8s/examples/minikube/datanode-service.yaml|  3 ++
 .../examples/minikube/datanode-statefulset.yaml| 28 +++-
 .../main/k8s/examples/minikube/om-statefulset.yaml |  8 
 .../k8s/examples/minikube/s3g-statefulset.yaml |  1 -
 .../k8s/examples/minikube/scm-statefulset.yaml |  2 +
 .../{flekszible/flekszible.yaml => Flekszible} |  5 ++-
 .../datanode-public-service.yaml}  |  7 ++-
 .../{minikube => ozone-dev}/datanode-service.yaml  |  3 ++
 ...de-daemonset.yaml => datanode-statefulset.yaml} | 34 +--
 .../k8s/examples/ozone-dev/om-statefulset.yaml |  2 +
 .../k8s/examples/ozone-dev/s3g-statefulset.yaml|  7 ++-
 .../k8s/examples/ozone-dev/scm-statefulset.yaml|  8 ++--
 .../{flekszible/flekszible.yaml => Flekszible} |  3 +-
 .../{minikube => ozone}/datanode-service.yaml  |  3 ++
 .../{minikube => ozone}/datanode-statefulset.yaml  | 40 +++--
 .../main/k8s/examples/ozone/om-statefulset.yaml| 15 +--
 .../main/k8s/examples/ozone/s3g-statefulset.yaml   | 13 --
 .../main/k8s/examples/ozone/scm-statefulset.yaml   | 14 --
 42 files changed, 313 insertions(+), 185 deletions(-)

diff --git 
a/hadoop-ozone/dist/src/main/k8s/examples/minikube/datanode-service.yaml 
b/hadoop-ozone/dist/src/main/k8s/definitions/ozone/datanode-ss-service.yaml
similarity index 91%
copy from hadoop-ozone/dist/src/main/k8s/examples/minikube/datanode-service.yaml
copy to 
hadoop-ozone/dist/src/main/k8s/definitions/ozone/datanode-ss-service.yaml
index 0e5927d..7c221d9 100644
--- a/hadoop-ozone/dist/src/main/k8s/examples/minikube/datanode-service.yaml
+++ b/hadoop-ozone/dist/src/main/k8s/definitions/ozone/datanode-ss-service.yaml
@@ -13,13 +13,15 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
 apiVersion: v1
 kind: Service
 metadata:
   name: datanode
 spec:
+  ports:
+  - port: 9870
+    name: rpc
   clusterIP: None
   selector:
-app: ozone
-component: datanode
+ app: ozone
+ component: datanode
diff --git a/hadoop-ozone/dist/src/main/k8s/definitions/ozone/datanode-ds.yaml 
b/hadoop-ozone/dist/src/main/k8s/definitions/ozone/datanode-ss.yaml
similarity index 64%
copy from hadoop-ozone/dist/src/main/k8s/definitions/ozone/datanode-ds.yaml
copy to hadoop-ozone/dist/src/main/k8s/definitions/ozone/datanode-ss.yaml
index fbc340c..94dc570 100644
--- a/hadoop-ozone/dist/src/main/k8s/definitions/ozone/datanode-ds.yaml
+++ b/hadoop-ozone/dist/src/main/k8s/definitions/ozone/datanode-ss.yaml
@@ -3,7 +3,7 @@
 # distributed with this work for additional information
 # regarding copyright ownership.  The ASF licenses this file
 # to you under the Ap

[hadoop] branch trunk updated: HADOOP-16350. Ability to tell HDFS client not to request KMS Information from NameNode. Contributed by Greg Senia, Ajay Kumar.

2019-06-24 Thread ajay
This is an automated email from the ASF dual-hosted git repository.

ajay pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 95c94dc  HADOOP-16350. Ability to tell HDFS client not to request KMS 
Information from NameNode. Contributed by Greg Senia, Ajay Kumar.
95c94dc is described below

commit 95c94dcca71a41e56a4c2989cf2aefdaf9923e13
Author: Ajay Kumar 
AuthorDate: Mon Jun 24 11:38:43 2019 -0700

HADOOP-16350. Ability to tell HDFS client not to request KMS Information 
from NameNode. Contributed by Greg Senia, Ajay Kumar.
---
 .../org/apache/hadoop/fs/CommonConfigurationKeys.java  | 13 +
 .../hadoop-common/src/main/resources/core-default.xml  | 14 ++
 .../main/java/org/apache/hadoop/hdfs/HdfsKMSUtil.java  | 18 ++
 .../org/apache/hadoop/hdfs/TestEncryptionZones.java| 11 +++
 4 files changed, 52 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
index 2e6b132..958113c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
@@ -403,4 +403,17 @@ public class CommonConfigurationKeys extends 
CommonConfigurationKeysPublic {
   public static final Class<? extends DomainNameResolver>
       HADOOP_DOMAINNAME_RESOLVER_IMPL_DEFAULT =
           DNSDomainNameResolver.class;
+  /*
+   * Ignore the KMS default URI returned from the NameNode.
+   * When set to true, the KMS URI is searched for in the following order:
+   * 1. A mapping in the Credentials' secrets map for the NameNode URI.
+   * 2. Fallback to the local conf.
+   * If the client chooses to ignore the KMS URI provided by the NameNode,
+   * it should set the KMS URI using 'hadoop.security.key.provider.path'
+   * to access the right KMS for encrypted files.
+   */
+  public static final String DFS_CLIENT_IGNORE_NAMENODE_DEFAULT_KMS_URI =
+  "dfs.client.ignore.namenode.default.kms.uri";
+  public static final boolean
+  DFS_CLIENT_IGNORE_NAMENODE_DEFAULT_KMS_URI_DEFAULT = false;
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 4e22e0a..5ae60d7 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -3480,4 +3480,18 @@
    with the input domain name of the services by querying the underlying DNS.
  </description>
 </property>
+
+<property>
+  <name>dfs.client.ignore.namenode.default.kms.uri</name>
+  <value>false</value>
+  <description>
+    Ignore KMS default URI returned from NameNode.
+    When set to true, the KMS URI is searched for in the following order:
+    1. A mapping in the Credentials' secrets map for the NameNode URI.
+    2. Fallback to the local conf (i.e. hadoop.security.key.provider.path).
+    If the client chooses to ignore the KMS URI provided by the NameNode, it
+    should set the KMS URI using 'hadoop.security.key.provider.path' to
+    access the right KMS for encrypted files.
+  </description>
+</property>
 </configuration>
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsKMSUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsKMSUtil.java
index 30e8aa7..d35b23f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsKMSUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsKMSUtil.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hdfs;
 
+import static 
org.apache.hadoop.fs.CommonConfigurationKeys.DFS_CLIENT_IGNORE_NAMENODE_DEFAULT_KMS_URI;
+import static 
org.apache.hadoop.fs.CommonConfigurationKeys.DFS_CLIENT_IGNORE_NAMENODE_DEFAULT_KMS_URI_DEFAULT;
 import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_CRYPTO_CODEC_CLASSES_KEY_PREFIX;
 
 import java.io.IOException;
@@ -141,11 +143,19 @@ public final class HdfsKMSUtil {
   URI.create(DFSUtilClient.bytes2String(keyProviderUriBytes));
 }
 if (keyProviderUri == null) {
-  // NN is old and doesn't report provider, so use conf.
-  if (keyProviderUriStr == null) {
+  // Check if NN provided uri is not null and ignore property is false.
+  if (keyProviderUriStr != null && !conf.getBoolean(
+  DFS_CLIENT_IGNORE_NAMENODE_DEFAULT_KMS_URI,
+  DFS_CLIENT_IGNORE_NAMENODE_DEFAULT_KMS_URI_DEFAULT)) {
+if (!keyProviderUriStr.isEmpty()) {
+  keyProviderUri = URI.create(keyProviderUriStr);
+}
+  }
+  // Fallback to configuration.
+  if (keyProviderUri == null) {
+// Either NN is old and
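
A minimal client-side sketch of the new setting (not part of the commit; the
KMS address below is a made-up example): with the ignore flag on, the client
skips the NameNode-provided KMS URI and uses the locally configured provider.

import org.apache.hadoop.conf.Configuration;

public class KmsUriOverrideExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Skip the KMS URI reported by the NameNode.
    conf.setBoolean("dfs.client.ignore.namenode.default.kms.uri", true);
    // Point the client at the KMS it should actually use.
    conf.set("hadoop.security.key.provider.path",
        "kms://https@kms.example.com:9600/kms");
    System.out.println(conf.get("hadoop.security.key.provider.path"));
  }
}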

[hadoop] branch trunk updated: HDDS-1597. Remove hdds-server-scm dependency from ozone-common (#969)

2019-06-24 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0042079  HDDS-1597. Remove hdds-server-scm dependency from 
ozone-common (#969)
0042079 is described below

commit 004207996c6610d67d1a202da5e51d7bc7890484
Author: Elek, Márton 
AuthorDate: Mon Jun 24 20:56:04 2019 +0200

HDDS-1597. Remove hdds-server-scm dependency from ozone-common (#969)

* HDDS-1597. Remove hdds-server-scm dependency from ozone-common. 
Contributed by Elek, Marton.

* checkstyle fixes

* revert the import reorder of HddsUtil

* add javadoc

* switch back to the commons-lang2

* fix typo

* fix metrics core classpath problem (+rebase fix)
---
 .../hadoop/hdds/scm/exceptions/SCMException.java|  0
 .../hadoop/hdds/scm/exceptions/package-info.java|  0
 hadoop-hdds/container-service/pom.xml   | 10 --
 .../org/apache/hadoop/hdds/server/ServerUtils.java  | 21 +
 hadoop-hdds/server-scm/pom.xml  |  4 
 .../java/org/apache/hadoop/hdds/scm/ScmUtils.java   | 21 ++---
 hadoop-ozone/common/pom.xml |  4 
 .../main/java/org/apache/hadoop/ozone/OmUtils.java  |  4 ++--
 hadoop-ozone/integration-test/pom.xml   |  9 +
 .../apache/hadoop/ozone/om/TestKeyManagerImpl.java  |  0
 hadoop-ozone/pom.xml|  5 +
 hadoop-ozone/tools/pom.xml  |  5 +
 12 files changed, 52 insertions(+), 31 deletions(-)

diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/exceptions/SCMException.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/exceptions/SCMException.java
similarity index 100%
rename from 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/exceptions/SCMException.java
rename to 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/exceptions/SCMException.java
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/exceptions/package-info.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/exceptions/package-info.java
similarity index 100%
rename from 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/exceptions/package-info.java
rename to 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/exceptions/package-info.java
diff --git a/hadoop-hdds/container-service/pom.xml 
b/hadoop-hdds/container-service/pom.xml
index c1dd403..2f89fa2 100644
--- a/hadoop-hdds/container-service/pom.xml
+++ b/hadoop-hdds/container-service/pom.xml
@@ -37,6 +37,10 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-hdds-server-framework</artifactId>
     </dependency>
+    <dependency>
+      <groupId>io.dropwizard.metrics</groupId>
+      <artifactId>metrics-core</artifactId>
+    </dependency>
 
     <dependency>
       <groupId>org.mockito</groupId>
@@ -56,12 +60,6 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
       <version>3.0.1</version>
       <scope>provided</scope>
     </dependency>
-
-    <dependency>
-      <groupId>io.dropwizard.metrics</groupId>
-      <artifactId>metrics-core</artifactId>
-      <scope>test</scope>
-    </dependency>
   </dependencies>
 
   <build>
diff --git 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServerUtils.java
 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServerUtils.java
index f775ca1..33a1ca9 100644
--- 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServerUtils.java
+++ 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServerUtils.java
@@ -203,4 +203,25 @@ public final class ServerUtils {
 conf.set(HddsConfigKeys.OZONE_METADATA_DIRS, path);
   }
 
+  /**
+   * Returns the service-specific metadata directory.
+   * <p>
+   * If the directory is missing, the method tries to create it.
+   *
+   * @param conf The ozone configuration object.
+   * @param key The configuration key which specifies the directory.
+   * @return The path of the directory.
+   */
+  public static File getDBPath(Configuration conf, String key) {
+final File dbDirPath =
+getDirectoryFromConfig(conf, key, "OM");
+if (dbDirPath != null) {
+  return dbDirPath;
+}
+
+LOG.warn("{} is not configured. We recommend adding this setting. "
++ "Falling back to {} instead.", key,
+HddsConfigKeys.OZONE_METADATA_DIRS);
+return ServerUtils.getOzoneMetaDirPath(conf);
+  }
 }
diff --git a/hadoop-hdds/server-scm/pom.xml b/hadoop-hdds/server-scm/pom.xml
index 99d5922..60b1b44 100644
--- a/hadoop-hdds/server-scm/pom.xml
+++ b/hadoop-hdds/server-scm/pom.xml
@@ -101,6 +101,10 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
       <artifactId>bcprov-jdk15on</artifactId>
     </dependency>
     <dependency>
+      <groupId>io.dropwizard.metrics</groupId>
+      <artifactId>metrics-core</artifactId>
+    </dependency>
+    <dependency>
       <groupId>com.google.code.findbugs</groupId>
       <artifactId>findbugs</artifactId>
       <scope>provided</scope>
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ScmUtils.java 
b/hadoop-hdds/server-scm/src/main/java/o
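
A minimal usage sketch for the getDBPath helper added above (not part of the
commit; the key name "ozone.om.db.dirs" and the paths are illustrative): when
the service-specific key is unset, the call logs the warning shown in the diff
and falls back to ozone.metadata.dirs.

import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.server.ServerUtils;

public class DbPathExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Only the generic metadata dir is set, so getDBPath falls back to it.
    conf.set("ozone.metadata.dirs", "/tmp/ozone-meta");
    File dbDir = ServerUtils.getDBPath(conf, "ozone.om.db.dirs");
    System.out.println("DB dir: " + dbDir.getAbsolutePath());
  }
}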

[hadoop] branch trunk updated (b220ec6 -> 719d57b)

2019-06-24 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from b220ec6  YARN-9374.  Improve Timeline service resilience when HBase is 
unavailable. Contributed by Prabhu Joseph and Szilard Nemeth
 new 41c94a6  HDFS-13906. RBF: Add multiple paths for dfsrouteradmin 'rm' 
and 'clrquota' commands. Contributed by Ayush Saxena.
 new b3fee1d  HDFS-14011. RBF: Add more information to HdfsFileStatus for a 
mount point. Contributed by Akira Ajisaka.
 new c5065bf  HDFS-13845. RBF: The default MountTableResolver should fail 
resolving multi-destination paths. Contributed by yanghuafeng.
 new 7b0bc49  HDFS-14024. RBF: ProvidedCapacityTotal json exception in 
NamenodeHeartbeatService. Contributed by CR Hota.
 new 6f2c871  HDFS-12284. RBF: Support for Kerberos authentication. 
Contributed by Sherwood Zheng and Inigo Goiri.
 new ebfd2d8  HDFS-12284. addendum to HDFS-12284. Contributed by Inigo 
Goiri.
 new 04caaba  HDFS-13852. RBF: The DN_REPORT_TIME_OUT and 
DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys. Contributed by 
yanghuafeng.
 new fa55eac  HDFS-13834. RBF: Connection creator thread should catch 
Throwable. Contributed by CR Hota.
 new f4bd111  HDFS-14082. RBF: Add option to fail operations when a 
subcluster is unavailable. Contributed by Inigo Goiri.
 new f2355c7  HDFS-13776. RBF: Add Storage policies related ClientProtocol 
APIs. Contributed by Dibyendu Karmakar.
 new 19088e1  HDFS-14089. RBF: Failed to specify server's Kerberos principal 
name in NamenodeHeartbeatService. Contributed by Ranith Sardar.
 new b320cae  HDFS-14085. RBF: LS command for root shows wrong owner and 
permission information. Contributed by Ayush Saxena.
 new 6aa7aab  HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. 
Contributed by Fei Hui.
 new 0ca7142  Revert "HDFS-14114. RBF: MIN_ACTIVE_RATIO should be 
configurable. Contributed by Fei Hui."
 new 94a8dec  HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. 
Contributed by Fei Hui.
 new 01b4126  HDFS-14152. RBF: Fix a typo in RouterAdmin usage. Contributed 
by Ayush Saxena.
 new bbe8591  HDFS-13869. RBF: Handle NPE for 
NamenodeBeanMetrics#getFederationMetrics. Contributed by Ranith Sardar.
 new 3d97142  HDFS-14151. RBF: Make the read-only column of Mount Table 
clearly understandable.
 new 8f6f9d9  HDFS-13443. RBF: Update mount table cache immediately after 
changing (add/update/remove) mount table entries. Contributed by Mohammad 
Arshad.
 new 1dc01e5  HDFS-14167. RBF: Add stale nodes to federation metrics. 
Contributed by Inigo Goiri.
 new f3cbf0e  HDFS-14161. RBF: Throw StandbyException instead of 
IOException so that client can retry when can not get connection. Contributed 
by Fei Hui.
 new 4244653  HDFS-14150. RBF: Quotas of the sub-cluster should be removed 
when removing the mount point. Contributed by Takanobu Asanuma.
 new b8bcbd0  HDFS-14191. RBF: Remove hard coded router status from 
FederationMetrics. Contributed by Ranith Sardar.
 new f4e2bfc  HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin 
-refreshRouterArgs command. Contributed by yanghuafeng.
 new 221f24c  HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo 
Goiri.
 new f40e10b  HDFS-14129. RBF: Create new policy provider for router. 
Contributed by Ranith Sardar.
 new 7b61cbf  HDFS-14129. addendum to HDFS-14129. Contributed by Ranith 
Sardar.
 new c012b09  HDFS-14193. RBF: Inconsistency with the Default Namespace. 
Contributed by Ayush Saxena.
 new 235406d  HDFS-14156. RBF: rollEdit() command fails with Router. 
Contributed by Shubham Dewan.
 new 020f83f  HDFS-14209. RBF: setQuota() through router is working for 
only the mount Points under the Source column in MountTable. Contributed by 
Shubham Dewan.
 new 8b9b58b  HDFS-14223. RBF: Add configuration documents for using 
multiple sub-clusters. Contributed by Takanobu Asanuma.
 new acdf911  HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() 
in case of multiple destinations. Contributed by Ayush Saxena.
 new 9eed3a4  HDFS-14215. RBF: Remove dependency on availability of default 
namespace. Contributed by Ayush Saxena.
 new 559cb11  HDFS-13404. RBF: 
TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails.
 new 9c4e556  HDFS-14225. RBF : MiniRouterDFSCluster should configure the 
failover proxy provider for namespace. Contributed by Ranith Sardar.
 new 912b90f  HDFS-14252. RBF : Exceptions are exposing the actual sub 
cluster path. Contributed by Ayush Saxena.
 new 7e63e37  HDFS-14230. RBF: Throw RetriableException instead of 
IOException when no namenodes available. Contributed by Fei Hui.
 new 75f8b6c  HDFS-13358. RBF: Support for Delegation Token (RPC). 
Contributed by CR Hota.
 new e2a3c44  HDFS-14226. RBF: Setting

[hadoop] branch trunk updated: YARN-9374. Improve Timeline service resilience when HBase is unavailable. Contributed by Prabhu Joseph and Szilard Nemeth

2019-06-24 Thread eyang
This is an automated email from the ASF dual-hosted git repository.

eyang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b220ec6  YARN-9374.  Improve Timeline service resilience when HBase is 
unavailable. Contributed by Prabhu Joseph and Szilard Nemeth
b220ec6 is described below

commit b220ec6f613dca4542e256008b1be2689c67bb03
Author: Eric Yang 
AuthorDate: Mon Jun 24 12:19:14 2019 -0400

YARN-9374.  Improve Timeline service resilience when HBase is unavailable.
Contributed by Prabhu Joseph and Szilard Nemeth
---
 .../storage/TestTimelineReaderHBaseDown.java   |  18 +++-
 .../storage/TestTimelineWriterHBaseDown.java   | 117 +
 .../storage/HBaseTimelineReaderImpl.java   |  13 +--
 .../storage/HBaseTimelineWriterImpl.java   |  19 +++-
 .../storage/TimelineStorageMonitor.java|   4 -
 5 files changed, 158 insertions(+), 13 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineReaderHBaseDown.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineReaderHBaseDown.java
index e738d39..1148b80 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineReaderHBaseDown.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineReaderHBaseDown.java
@@ -150,7 +150,14 @@ public class TestTimelineReaderHBaseDown {
   waitForHBaseDown(htr);
 
   util.startMiniHBaseCluster(1, 1);
-  GenericTestUtils.waitFor(() -> !htr.isHBaseDown(), 1000, 150000);
+  GenericTestUtils.waitFor(() -> {
+try {
+  htr.getTimelineStorageMonitor().checkStorageIsUp();
+  return true;
+} catch (IOException e) {
+  return false;
+}
+  }, 1000, 150000);
 } finally {
   util.shutdownMiniCluster();
 }
@@ -158,8 +165,15 @@ public class TestTimelineReaderHBaseDown {
 
   private static void waitForHBaseDown(HBaseTimelineReaderImpl htr) throws
   TimeoutException, InterruptedException {
-GenericTestUtils.waitFor(() -> htr.isHBaseDown(), 1000, 150000);
 try {
+  GenericTestUtils.waitFor(() -> {
+try {
+  htr.getTimelineStorageMonitor().checkStorageIsUp();
+  return false;
+} catch (IOException e) {
+  return true;
+}
+  }, 1000, 150000);
   checkQuery(htr);
   Assert.fail("Query should fail when HBase is down");
 } catch (IOException e) {
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineWriterHBaseDown.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineWriterHBaseDown.java
new file mode 100644
index 000..cb89ba4
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineWriterHBaseDown.java
@@ -0,0 +1,117 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.timelineservice.storage;
+
+import java.io.IOException;
+
+import org.junit.Test;
+import org.junit.Assert;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.GenericTestUtils;
+import 
org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCo
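
The tests above poll storage health by probing checkStorageIsUp() and mapping
its IOException to a boolean. A self-contained sketch of that polling idiom
(the flaky probe below is invented for illustration; GenericTestUtils ships
with hadoop-common's test artifact):

import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.test.GenericTestUtils;

public class WaitForProbeDemo {
  public static void main(String[] args) throws Exception {
    AtomicInteger attempts = new AtomicInteger();
    GenericTestUtils.waitFor(() -> {
      try {
        // Stand-in for checkStorageIsUp(): fails the first two probes.
        if (attempts.incrementAndGet() < 3) {
          throw new IOException("storage still down");
        }
        return true;   // probe succeeded: storage is up
      } catch (IOException e) {
        return false;  // still down, keep polling
      }
    }, 1000, 150000);  // poll every 1s, give up after 150s
    System.out.println("storage came up after " + attempts.get() + " probes");
  }
}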

[hadoop] branch trunk updated: HDDS-1719 : Increase ratis log segment size to 1MB. (#1005)

2019-06-24 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 81d2f1b  HDDS-1719 : Increase ratis log segment size to 1MB. (#1005)
81d2f1b is described below

commit 81d2f1b724c02cc477b3d6ea7f286e2fad3e3c67
Author: avijayanhwx <14299376+avijayan...@users.noreply.github.com>
AuthorDate: Mon Jun 24 08:57:08 2019 -0700

HDDS-1719 : Increase ratis log segment size to 1MB. (#1005)

* HDDS-1719 : Increase ratis log segment size to 1MB.

* HDDS-1719 : Increase ratis log segment size to 1MB.
---
 .../src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java   | 2 +-
 hadoop-hdds/common/src/main/resources/ozone-default.xml   | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index c91d1c1..ae09c9d 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -74,7 +74,7 @@ public final class ScmConfigKeys {
   public static final String DFS_CONTAINER_RATIS_SEGMENT_SIZE_KEY =
   "dfs.container.ratis.segment.size";
   public static final String DFS_CONTAINER_RATIS_SEGMENT_SIZE_DEFAULT =
-  "16KB";
+  "1MB";
   public static final String DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_KEY 
=
   "dfs.container.ratis.segment.preallocated.size";
   public static final String
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index d32a6ee..427def9 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -205,10 +205,10 @@
   </property>
   <property>
     <name>dfs.container.ratis.segment.size</name>
-    <value>16KB</value>
+    <value>1MB</value>
     <tag>OZONE, RATIS, PERFORMANCE</tag>
     <description>The size of the raft segment used by Apache Ratis on datanodes.
-      (16 KB by default)
+      (1 MB by default)
     </description>
   </property>
   <property>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
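
For deployments that want a segment size other than the shipped default, a
small sketch (not from the commit; the 2MB value is an arbitrary example) of
overriding the key programmatically:

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;

public class SegmentSizeOverride {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Override the shipped 1MB default for a write-heavy pipeline.
    conf.set(ScmConfigKeys.DFS_CONTAINER_RATIS_SEGMENT_SIZE_KEY, "2MB");
    System.out.println(conf.get(
        ScmConfigKeys.DFS_CONTAINER_RATIS_SEGMENT_SIZE_KEY,
        ScmConfigKeys.DFS_CONTAINER_RATIS_SEGMENT_SIZE_DEFAULT));
  }
}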



[hadoop] branch trunk updated: HDFS-14339. Inconsistent log level practices in RpcProgramNfs3.java. Contributed by Anuhan Torgonshar.

2019-06-24 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0514540  HDFS-14339. Inconsistent log level practices in 
RpcProgramNfs3.java. Contributed by Anuhan Torgonshar.
0514540 is described below

commit 05145404d54c6c3cc65833d317977ba12599514d
Author: Wei-Chiu Chuang 
AuthorDate: Mon Jun 24 08:30:48 2019 -0700

HDFS-14339. Inconsistent log level practices in RpcProgramNfs3.java. 
Contributed by Anuhan Torgonshar.
---
 .../src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
index ea5cdce..cb46f44 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
@@ -1836,7 +1836,7 @@ public class RpcProgramNfs3 extends RpcProgram implements 
Nfs3Interface {
 try {
   attr = writeManager.getFileAttr(dfsClient, childHandle, iug);
 } catch (IOException e) {
-  LOG.error("Can't get file attributes for fileId: {}", fileId, e);
+  LOG.info("Can't get file attributes for fileId: {}", fileId, e);
   continue;
 }
 entries[i] = new READDIRPLUS3Response.EntryPlus3(fileId,
@@ -1853,7 +1853,7 @@ public class RpcProgramNfs3 extends RpcProgram implements 
Nfs3Interface {
 try {
   attr = writeManager.getFileAttr(dfsClient, childHandle, iug);
 } catch (IOException e) {
-  LOG.error("Can't get file attributes for fileId: {}", fileId, e);
+  LOG.info("Can't get file attributes for fileId: {}", fileId, e);
   continue;
 }
 entries[i] = new READDIRPLUS3Response.EntryPlus3(fileId,


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
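
A small illustrative sketch (not from the commit): with SLF4J, a Throwable
passed after the placeholder arguments is still printed with its stack trace,
so the downgrade from error to info keeps the full diagnostics.

import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogLevelDemo {
  private static final Logger LOG = LoggerFactory.getLogger(LogLevelDemo.class);

  public static void main(String[] args) {
    long fileId = 42L;
    IOException e = new IOException("attr lookup failed");
    // {} receives fileId; the trailing exception is logged with a stack trace.
    LOG.info("Can't get file attributes for fileId: {}", fileId, e);
  }
}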