hadoop git commit: HADOOP-15296. Fix a wrong link for RBF in the top page. Contributed by Takanobu Asanuma.

2018-03-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 583f45943 -> 4cc9a6d9b


HADOOP-15296. Fix a wrong link for RBF in the top page. Contributed by Takanobu 
Asanuma.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4cc9a6d9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4cc9a6d9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4cc9a6d9

Branch: refs/heads/trunk
Commit: 4cc9a6d9bb34329d6de30706d5432c7cb675bb88
Parents: 583f459
Author: Yiqun Lin 
Authored: Thu Mar 8 16:02:34 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Mar 8 16:02:34 2018 +0800

--
 hadoop-project/src/site/markdown/index.md.vm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4cc9a6d9/hadoop-project/src/site/markdown/index.md.vm
--
diff --git a/hadoop-project/src/site/markdown/index.md.vm 
b/hadoop-project/src/site/markdown/index.md.vm
index 9b2d9de..8b9cfda 100644
--- a/hadoop-project/src/site/markdown/index.md.vm
+++ b/hadoop-project/src/site/markdown/index.md.vm
@@ -223,7 +223,7 @@ functionality, except the mount table is managed on the 
server-side by the
 routing layer rather than on the client. This simplifies access to a federated
 cluster for existing HDFS clients.
 
-See [HDFS-10467](https://issues.apache.org/jira/browse/HADOOP-10467) and the
+See [HDFS-10467](https://issues.apache.org/jira/browse/HDFS-10467) and the
 HDFS Router-based Federation
 [documentation](./hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html) 
for
 more details.
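
For orientation, the paragraph being fixed describes Router-based Federation: the mount table is kept by the routers, so an existing HDFS client needs nothing beyond pointing fs.defaultFS at a Router endpoint. A minimal sketch of that (not part of this commit; the router host and port below are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RouterClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder Router RPC address; the client config is the only change,
    // since the mount table is resolved server-side by the routing layer.
    conf.set("fs.defaultFS", "hdfs://router.example.com:8888");
    try (FileSystem fs = FileSystem.get(conf)) {
      // Ordinary HDFS calls; the Router forwards each one to the right
      // subcluster according to its mount table.
      System.out.println(fs.exists(new Path("/user")));
    }
  }
}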





hadoop git commit: HADOOP-15296. Fix a wrong link for RBF in the top page. Contributed by Takanobu Asanuma.

2018-03-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 885d46a24 -> 56f14cb5f


HADOOP-15296. Fix a wrong link for RBF in the top page. Contributed by Takanobu 
Asanuma.

(cherry picked from commit 4cc9a6d9bb34329d6de30706d5432c7cb675bb88)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/56f14cb5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/56f14cb5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/56f14cb5

Branch: refs/heads/branch-3.1
Commit: 56f14cb5fb6efe1f518d9202506f7b68f6912b7f
Parents: 885d46a
Author: Yiqun Lin 
Authored: Thu Mar 8 16:02:34 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Mar 8 16:04:20 2018 +0800

--
 hadoop-project/src/site/markdown/index.md.vm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/56f14cb5/hadoop-project/src/site/markdown/index.md.vm
--
diff --git a/hadoop-project/src/site/markdown/index.md.vm 
b/hadoop-project/src/site/markdown/index.md.vm
index 9b2d9de..8b9cfda 100644
--- a/hadoop-project/src/site/markdown/index.md.vm
+++ b/hadoop-project/src/site/markdown/index.md.vm
@@ -223,7 +223,7 @@ functionality, except the mount table is managed on the 
server-side by the
 routing layer rather than on the client. This simplifies access to a federated
 cluster for existing HDFS clients.
 
-See [HDFS-10467](https://issues.apache.org/jira/browse/HADOOP-10467) and the
+See [HDFS-10467](https://issues.apache.org/jira/browse/HDFS-10467) and the
 HDFS Router-based Federation
 [documentation](./hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html) 
for
 more details.





hadoop git commit: HADOOP-15296. Fix a wrong link for RBF in the top page. Contributed by Takanobu Asanuma.

2018-03-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 0b1e9665d -> 7a9aea17a


HADOOP-15296. Fix a wrong link for RBF in the top page. Contributed by Takanobu 
Asanuma.

(cherry picked from commit 4cc9a6d9bb34329d6de30706d5432c7cb675bb88)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7a9aea17
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7a9aea17
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7a9aea17

Branch: refs/heads/branch-3.0
Commit: 7a9aea17a2ecd5f9f42d6eef792c7d9a8f310e51
Parents: 0b1e966
Author: Yiqun Lin 
Authored: Thu Mar 8 16:02:34 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Mar 8 16:05:53 2018 +0800

--
 hadoop-project/src/site/markdown/index.md.vm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7a9aea17/hadoop-project/src/site/markdown/index.md.vm
--
diff --git a/hadoop-project/src/site/markdown/index.md.vm 
b/hadoop-project/src/site/markdown/index.md.vm
index 9b2d9de..8b9cfda 100644
--- a/hadoop-project/src/site/markdown/index.md.vm
+++ b/hadoop-project/src/site/markdown/index.md.vm
@@ -223,7 +223,7 @@ functionality, except the mount table is managed on the 
server-side by the
 routing layer rather than on the client. This simplifies access to a federated
 cluster for existing HDFS clients.
 
-See [HDFS-10467](https://issues.apache.org/jira/browse/HADOOP-10467) and the
+See [HDFS-10467](https://issues.apache.org/jira/browse/HDFS-10467) and the
 HDFS Router-based Federation
 [documentation](./hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html) 
for
 more details.





hadoop git commit: YARN-8011. TestOpportunisticContainerAllocatorAMService#testContainerPromoteAndDemoteBeforeContainerStart fails intermittently. Contributed by Tao Yang.

2018-03-08 Thread wwei
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4cc9a6d9b -> b451889e8


YARN-8011. 
TestOpportunisticContainerAllocatorAMService#testContainerPromoteAndDemoteBeforeContainerStart
 fails intermittently. Contributed by Tao Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b451889e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b451889e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b451889e

Branch: refs/heads/trunk
Commit: b451889e8e83f7977f2b76789c61e823e2d40487
Parents: 4cc9a6d
Author: Weiwei Yang 
Authored: Thu Mar 8 18:13:36 2018 +0800
Committer: Weiwei Yang 
Committed: Thu Mar 8 18:13:36 2018 +0800

--
 ...pportunisticContainerAllocatorAMService.java | 29 ++--
 1 file changed, 15 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b451889e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestOpportunisticContainerAllocatorAMService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestOpportunisticContainerAllocatorAMService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestOpportunisticContainerAllocatorAMService.java
index 1af930f..efa76bc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestOpportunisticContainerAllocatorAMService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestOpportunisticContainerAllocatorAMService.java
@@ -243,13 +243,13 @@ public class TestOpportunisticContainerAllocatorAMService 
{
 null, ExecutionType.GUARANTEED)));
 // Node on same host should not result in allocation
 sameHostDiffNode.nodeHeartbeat(true);
-Thread.sleep(200);
+rm.drainEvents();
 allocateResponse =  am1.allocate(new ArrayList<>(), new ArrayList<>());
 Assert.assertEquals(0, allocateResponse.getUpdatedContainers().size());
 
 // Wait for scheduler to process all events
 dispatcher.waitForEventThreadToWait();
-Thread.sleep(1000);
+rm.drainEvents();
 // Verify Metrics After OPP allocation (Nothing should change again)
 verifyMetrics(metrics, 15360, 15, 1024, 1, 1);
 
@@ -286,7 +286,7 @@ public class TestOpportunisticContainerAllocatorAMService {
 
 // Ensure after correct node heartbeats, we should get the allocation
 allocNode.nodeHeartbeat(true);
-Thread.sleep(200);
+rm.drainEvents();
 allocateResponse =  am1.allocate(new ArrayList<>(), new ArrayList<>());
 Assert.assertEquals(1, allocateResponse.getUpdatedContainers().size());
 Container uc =
@@ -303,7 +303,7 @@ public class TestOpportunisticContainerAllocatorAMService {
 nm2.nodeHeartbeat(true);
 nm3.nodeHeartbeat(true);
 nm4.nodeHeartbeat(true);
-Thread.sleep(200);
+rm.drainEvents();
 
 // Verify that the container is still in ACQUIRED state wrt the RM.
 RMContainer rmContainer = ((CapacityScheduler) scheduler)
@@ -325,6 +325,7 @@ public class TestOpportunisticContainerAllocatorAMService {
 
 // Wait for scheduler to finish processing events
 dispatcher.waitForEventThreadToWait();
+rm.drainEvents();
 // Verify Metrics After OPP allocation :
 // Everything should have reverted to what it was
 verifyMetrics(metrics, 15360, 15, 1024, 1, 1);
@@ -396,7 +397,7 @@ public class TestOpportunisticContainerAllocatorAMService {
 ContainerStatus.newInstance(container.getId(),
 ExecutionType.OPPORTUNISTIC, ContainerState.RUNNING, "", 0)),
 true);
-Thread.sleep(200);
+rm.drainEvents();
 
 // Verify that container is actually running wrt the RM..
 RMContainer rmContainer = ((CapacityScheduler) scheduler)
@@ -434,7 +435,7 @@ public class TestOpportunisticContainerAllocatorAMService {
 ContainerStatus.newInstance(container.getId(),
 ExecutionType.OPPORTUNISTIC, ContainerState.RUNNING, "", 0)),
 true);
-Thread.sleep(200);
+rm.drainEvents();
 
 allocateResponse =  am1.allocate(new ArrayList<>(), new ArrayList<>());
 Assert.assertEquals(1, allocateResponse.getUpdatedContainers().size());
@@ -521,7 +522,7 @@ public class TestOpportunisticContainerAllocatorAMService {
 ContainerStatus.newInstance(container.getId(),
 ExecutionType.OPPORTUNISTI

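The substance of YARN-8011 is visible in the hunks above: fixed Thread.sleep() waits are replaced with rm.drainEvents(), so the test blocks exactly until the RM's asynchronous dispatcher has processed the queued events instead of guessing a delay. A self-contained toy of that idea in plain JDK code (not the MockRM internals):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DrainInsteadOfSleep {
  public static void main(String[] args) throws Exception {
    // A single-threaded executor stands in for the RM's async event dispatcher.
    ExecutorService dispatcher = Executors.newSingleThreadExecutor();

    // Work queued by, say, a node heartbeat.
    dispatcher.submit(() -> System.out.println("allocate container"));

    // Thread.sleep(200) merely hopes the work finished in time and is flaky on
    // slow machines; waiting on a no-op submitted afterwards guarantees every
    // earlier event on the FIFO queue has already been handled.
    dispatcher.submit(() -> { }).get();

    System.out.println("safe to assert on allocation state");
    dispatcher.shutdown();
  }
}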
[2/2] hadoop git commit: HADOOP-15292. Distcp's use of pread is slowing it down. Contributed by Virajith Jalaparti.

2018-03-08 Thread stevel
HADOOP-15292. Distcp's use of pread is slowing it down.
Contributed by Virajith Jalaparti.

(cherry picked from commit 3bd6b1fd85c44354c777ef4fda6415231505b2a4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f0b486f6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f0b486f6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f0b486f6

Branch: refs/heads/branch-3.1
Commit: f0b486f6ae2cc76c7cf80a0681a16386c00d3390
Parents: 56f14cb
Author: Steve Loughran 
Authored: Thu Mar 8 11:18:33 2018 +
Committer: Steve Loughran 
Committed: Thu Mar 8 11:18:33 2018 +

--
 .../tools/mapred/RetriableFileCopyCommand.java  | 24 ++
 .../hadoop/tools/util/ThrottledInputStream.java | 48 +++-
 .../hadoop/tools/mapred/TestCopyMapper.java | 24 +-
 3 files changed, 66 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f0b486f6/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
index 21f621a..0311061 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
@@ -260,7 +260,8 @@ public class RetriableFileCopyCommand extends 
RetriableCommand {
 boolean finished = false;
 try {
   inStream = getInputStream(source, context.getConfiguration());
-  int bytesRead = readBytes(inStream, buf, sourceOffset);
+  seekIfRequired(inStream, sourceOffset);
+  int bytesRead = readBytes(inStream, buf);
   while (bytesRead >= 0) {
 if (chunkLength > 0 &&
 (totalBytesRead + bytesRead) >= chunkLength) {
@@ -276,7 +277,7 @@ public class RetriableFileCopyCommand extends 
RetriableCommand {
 if (finished) {
   break;
 }
-bytesRead = readBytes(inStream, buf, sourceOffset);
+bytesRead = readBytes(inStream, buf);
   }
   outStream.close();
   outStream = null;
@@ -299,13 +300,20 @@ public class RetriableFileCopyCommand extends 
RetriableCommand {
 context.setStatus(message.toString());
   }
 
-  private static int readBytes(ThrottledInputStream inStream, byte buf[],
-  long position) throws IOException {
+  private static int readBytes(ThrottledInputStream inStream, byte buf[])
+  throws IOException {
+try {
+  return inStream.read(buf);
+} catch (IOException e) {
+  throw new CopyReadException(e);
+}
+  }
+
+  private static void seekIfRequired(ThrottledInputStream inStream,
+  long sourceOffset) throws IOException {
 try {
-  if (position == 0) {
-return inStream.read(buf);
-  } else {
-return inStream.read(position, buf, 0, buf.length);
+  if (sourceOffset != inStream.getPos()) {
+inStream.seek(sourceOffset);
   }
 } catch (IOException e) {
   throw new CopyReadException(e);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f0b486f6/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
index 2d2f10c..4d3676a 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
@@ -18,7 +18,7 @@
 
 package org.apache.hadoop.tools.util;
 
-import org.apache.hadoop.fs.PositionedReadable;
+import org.apache.hadoop.fs.Seekable;
 
 import java.io.IOException;
 import java.io.InputStream;
@@ -33,7 +33,7 @@ import java.io.InputStream;
  * (Thus, while the read-rate might exceed the maximum for a given short 
interval,
  * the average tends towards the specified maximum, overall.)
  */
-public class ThrottledInputStream extends InputStream {
+public class ThrottledInputStream extends InputStream implements Seekable {
 
   private final InputStream rawStream;
   private final float maxBytesPerSec;
@@ -95,25 +95,6 @@ public class ThrottledInputStream extends InputStream {
 return readLen;
   }
 
-  /**
-   * Read bytes starting from the specified position. This requires rawStream 
is
-   * an instance of {@link PositionedR

[1/2] hadoop git commit: HADOOP-15292. Distcp's use of pread is slowing it down. Contributed by Virajith Jalaparti.

2018-03-08 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 56f14cb5f -> f0b486f6a
  refs/heads/trunk b451889e8 -> 3bd6b1fd8


HADOOP-15292. Distcp's use of pread is slowing it down.
Contributed by Virajith Jalaparti.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3bd6b1fd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3bd6b1fd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3bd6b1fd

Branch: refs/heads/trunk
Commit: 3bd6b1fd85c44354c777ef4fda6415231505b2a4
Parents: b451889
Author: Steve Loughran 
Authored: Thu Mar 8 11:15:46 2018 +
Committer: Steve Loughran 
Committed: Thu Mar 8 11:15:46 2018 +

--
 .../tools/mapred/RetriableFileCopyCommand.java  | 24 ++
 .../hadoop/tools/util/ThrottledInputStream.java | 48 +++-
 .../hadoop/tools/mapred/TestCopyMapper.java | 24 +-
 3 files changed, 66 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bd6b1fd/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
index 21f621a..0311061 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
@@ -260,7 +260,8 @@ public class RetriableFileCopyCommand extends 
RetriableCommand {
 boolean finished = false;
 try {
   inStream = getInputStream(source, context.getConfiguration());
-  int bytesRead = readBytes(inStream, buf, sourceOffset);
+  seekIfRequired(inStream, sourceOffset);
+  int bytesRead = readBytes(inStream, buf);
   while (bytesRead >= 0) {
 if (chunkLength > 0 &&
 (totalBytesRead + bytesRead) >= chunkLength) {
@@ -276,7 +277,7 @@ public class RetriableFileCopyCommand extends 
RetriableCommand {
 if (finished) {
   break;
 }
-bytesRead = readBytes(inStream, buf, sourceOffset);
+bytesRead = readBytes(inStream, buf);
   }
   outStream.close();
   outStream = null;
@@ -299,13 +300,20 @@ public class RetriableFileCopyCommand extends 
RetriableCommand {
 context.setStatus(message.toString());
   }
 
-  private static int readBytes(ThrottledInputStream inStream, byte buf[],
-  long position) throws IOException {
+  private static int readBytes(ThrottledInputStream inStream, byte buf[])
+  throws IOException {
+try {
+  return inStream.read(buf);
+} catch (IOException e) {
+  throw new CopyReadException(e);
+}
+  }
+
+  private static void seekIfRequired(ThrottledInputStream inStream,
+  long sourceOffset) throws IOException {
 try {
-  if (position == 0) {
-return inStream.read(buf);
-  } else {
-return inStream.read(position, buf, 0, buf.length);
+  if (sourceOffset != inStream.getPos()) {
+inStream.seek(sourceOffset);
   }
 } catch (IOException e) {
   throw new CopyReadException(e);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bd6b1fd/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
index 2d2f10c..4d3676a 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
@@ -18,7 +18,7 @@
 
 package org.apache.hadoop.tools.util;
 
-import org.apache.hadoop.fs.PositionedReadable;
+import org.apache.hadoop.fs.Seekable;
 
 import java.io.IOException;
 import java.io.InputStream;
@@ -33,7 +33,7 @@ import java.io.InputStream;
  * (Thus, while the read-rate might exceed the maximum for a given short 
interval,
  * the average tends towards the specified maximum, overall.)
  */
-public class ThrottledInputStream extends InputStream {
+public class ThrottledInputStream extends InputStream implements Seekable {
 
   private final InputStream rawStream;
   private final float maxBytesPerSec;
@@ -95,25 +95,6 @@ public class ThrottledInputStream extends InputStream {
 return readLen;
   }
 
-  /**
-   * Read bytes starting from the specified position. This requires 

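In short, the patch stops using ThrottledInputStream's positional read and instead makes the stream Seekable: the copy loop seeks once to the chunk offset (seekIfRequired) and then reads sequentially (readBytes). A hedged sketch of the two access patterns using the standard FSDataInputStream API; the path and offset are placeholders and this is not the DistCp code itself:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SeekThenStreamSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    byte[] buf = new byte[8192];
    long offset = 4096L; // placeholder chunk offset

    try (FSDataInputStream in = fs.open(new Path("/tmp/source-file"))) {
      // pread style: every call carries the position, which some streams
      // service with a fresh positioned request per buffer.
      int n1 = in.read(offset, buf, 0, buf.length);

      // seek-once-then-stream style (what the patch moves to): position the
      // stream a single time, then issue plain sequential reads.
      if (in.getPos() != offset) {
        in.seek(offset);
      }
      int n2 = in.read(buf);
      System.out.println(n1 + " / " + n2);
    }
  }
}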
[1/2] hadoop git commit: HADOOP-15273. distcp can't handle remote stores with different checksum algorithms. Contributed by Steve Loughran.

2018-03-08 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 f0b486f6a -> ba0184376
  refs/heads/trunk 3bd6b1fd8 -> 7ef4d942d


HADOOP-15273. distcp can't handle remote stores with different checksum
algorithms.
Contributed by Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7ef4d942
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7ef4d942
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7ef4d942

Branch: refs/heads/trunk
Commit: 7ef4d942dd96232b0743a40ed25f77065254f94d
Parents: 3bd6b1f
Author: Steve Loughran 
Authored: Thu Mar 8 11:24:06 2018 +
Committer: Steve Loughran 
Committed: Thu Mar 8 11:24:06 2018 +

--
 .../org/apache/hadoop/tools/DistCpOptions.java  |  5 
 .../tools/mapred/RetriableFileCopyCommand.java  | 29 +++-
 .../hadoop/tools/mapred/TestCopyMapper.java | 14 +-
 3 files changed, 29 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7ef4d942/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
index ece1a94..f33f7fd 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
@@ -534,11 +534,6 @@ public final class DistCpOptions {
 + "mutually exclusive");
   }
 
-  if (!syncFolder && skipCRC) {
-throw new IllegalArgumentException(
-"Skip CRC is valid only with update options");
-  }
-
   if (!syncFolder && append) {
 throw new IllegalArgumentException(
 "Append is valid only with update options");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7ef4d942/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
index 0311061..55f90d0 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
@@ -210,15 +210,30 @@ public class RetriableFileCopyCommand extends 
RetriableCommand {
   throws IOException {
 if (!DistCpUtils.checksumsAreEqual(sourceFS, source, sourceChecksum,
 targetFS, target)) {
-  StringBuilder errorMessage = new StringBuilder("Check-sum mismatch 
between ")
-  .append(source).append(" and ").append(target).append(".");
-  if (sourceFS.getFileStatus(source).getBlockSize() !=
+  StringBuilder errorMessage =
+  new StringBuilder("Checksum mismatch between ")
+  .append(source).append(" and ").append(target).append(".");
+  boolean addSkipHint = false;
+  String srcScheme = sourceFS.getScheme();
+  String targetScheme = targetFS.getScheme();
+  if (!srcScheme.equals(targetScheme)
+  && !(srcScheme.contains("hdfs") && targetScheme.contains("hdfs"))) {
+// the filesystems are different and they aren't both hdfs connectors
+errorMessage.append("Source and destination filesystems are of"
++ " different types\n")
+.append("Their checksum algorithms may be incompatible");
+addSkipHint = true;
+  } else if (sourceFS.getFileStatus(source).getBlockSize() !=
   targetFS.getFileStatus(target).getBlockSize()) {
-errorMessage.append(" Source and target differ in block-size.")
-.append(" Use -pb to preserve block-sizes during copy.")
-.append(" Alternatively, skip checksum-checks altogether, using 
-skipCrc.")
+errorMessage.append(" Source and target differ in block-size.\n")
+.append(" Use -pb to preserve block-sizes during copy.");
+addSkipHint = true;
+  }
+  if (addSkipHint) {
+errorMessage.append(" You can skip checksum-checks altogether "
++ " with -skipcrccheck.\n")
 .append(" (NOTE: By skipping checksums, one runs the risk of " +
-"masking data-corruption during file-transfer.)");
+"masking data-corruption during file-transfer.)\n");
   }
   throw new IOException(errorMessage.toString());
 }

http://git-wip-us.apache.org/rep

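Restating the change above: when the post-copy checksum comparison fails, the message now distinguishes "different filesystem types, so the checksum algorithms may be incompatible" from "same type but different block sizes", suggesting -pb for the block-size case and -skipcrccheck as the general escape hatch, and DistCpOptions no longer insists that -skipcrccheck be paired with -update. A small illustration of the new scheme test (a paraphrase, not the patch verbatim):

public class ChecksumHintSketch {
  static boolean likelyIncompatibleChecksums(String srcScheme, String dstScheme) {
    // Different filesystem types that are not both HDFS-style connectors:
    // their checksum algorithms may simply not be comparable.
    return !srcScheme.equals(dstScheme)
        && !(srcScheme.contains("hdfs") && dstScheme.contains("hdfs"));
  }

  public static void main(String[] args) {
    System.out.println(likelyIncompatibleChecksums("hdfs", "s3a"));     // true
    System.out.println(likelyIncompatibleChecksums("hdfs", "webhdfs")); // false, both HDFS-style
    System.out.println(likelyIncompatibleChecksums("hdfs", "hdfs"));    // false, same type
  }
}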
[2/2] hadoop git commit: HADOOP-15273. distcp can't handle remote stores with different checksum algorithms. Contributed by Steve Loughran.

2018-03-08 Thread stevel
HADOOP-15273. distcp can't handle remote stores with different checksum
algorithms.
Contributed by Steve Loughran.

(cherry picked from commit 7ef4d942dd96232b0743a40ed25f77065254f94d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ba018437
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ba018437
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ba018437

Branch: refs/heads/branch-3.1
Commit: ba0184376e8b9e46657f6c6557d694529bb3ceb0
Parents: f0b486f
Author: Steve Loughran 
Authored: Thu Mar 8 11:24:06 2018 +
Committer: Steve Loughran 
Committed: Thu Mar 8 11:25:05 2018 +

--
 .../org/apache/hadoop/tools/DistCpOptions.java  |  5 
 .../tools/mapred/RetriableFileCopyCommand.java  | 29 +++-
 .../hadoop/tools/mapred/TestCopyMapper.java | 14 +-
 3 files changed, 29 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ba018437/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
index ece1a94..f33f7fd 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
@@ -534,11 +534,6 @@ public final class DistCpOptions {
 + "mutually exclusive");
   }
 
-  if (!syncFolder && skipCRC) {
-throw new IllegalArgumentException(
-"Skip CRC is valid only with update options");
-  }
-
   if (!syncFolder && append) {
 throw new IllegalArgumentException(
 "Append is valid only with update options");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ba018437/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
index 0311061..55f90d0 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
@@ -210,15 +210,30 @@ public class RetriableFileCopyCommand extends 
RetriableCommand {
   throws IOException {
 if (!DistCpUtils.checksumsAreEqual(sourceFS, source, sourceChecksum,
 targetFS, target)) {
-  StringBuilder errorMessage = new StringBuilder("Check-sum mismatch 
between ")
-  .append(source).append(" and ").append(target).append(".");
-  if (sourceFS.getFileStatus(source).getBlockSize() !=
+  StringBuilder errorMessage =
+  new StringBuilder("Checksum mismatch between ")
+  .append(source).append(" and ").append(target).append(".");
+  boolean addSkipHint = false;
+  String srcScheme = sourceFS.getScheme();
+  String targetScheme = targetFS.getScheme();
+  if (!srcScheme.equals(targetScheme)
+  && !(srcScheme.contains("hdfs") && targetScheme.contains("hdfs"))) {
+// the filesystems are different and they aren't both hdfs connectors
+errorMessage.append("Source and destination filesystems are of"
++ " different types\n")
+.append("Their checksum algorithms may be incompatible");
+addSkipHint = true;
+  } else if (sourceFS.getFileStatus(source).getBlockSize() !=
   targetFS.getFileStatus(target).getBlockSize()) {
-errorMessage.append(" Source and target differ in block-size.")
-.append(" Use -pb to preserve block-sizes during copy.")
-.append(" Alternatively, skip checksum-checks altogether, using 
-skipCrc.")
+errorMessage.append(" Source and target differ in block-size.\n")
+.append(" Use -pb to preserve block-sizes during copy.");
+addSkipHint = true;
+  }
+  if (addSkipHint) {
+errorMessage.append(" You can skip checksum-checks altogether "
++ " with -skipcrccheck.\n")
 .append(" (NOTE: By skipping checksums, one runs the risk of " +
-"masking data-corruption during file-transfer.)");
+"masking data-corruption during file-transfer.)\n");
   }
   throw new IOException(errorMessage.toString());
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ba018437/hadoop-tools/hadoop-distc

hadoop git commit: HDFS-13232. RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns. Contributed by Chao Sun.

2018-03-08 Thread inigoiri
Repository: hadoop
Updated Branches:
  refs/heads/trunk 7ef4d942d -> 0c2b969e0


HDFS-13232. RBF: ConnectionManager's cleanup task will compare each pool's own 
active conns with its total conns. Contributed by Chao Sun.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0c2b969e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0c2b969e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0c2b969e

Branch: refs/heads/trunk
Commit: 0c2b969e0161a068bf9ae013c4b95508dfb90a8a
Parents: 7ef4d94
Author: Inigo Goiri 
Authored: Thu Mar 8 09:32:05 2018 -0800
Committer: Inigo Goiri 
Committed: Thu Mar 8 09:32:05 2018 -0800

--
 .../federation/router/ConnectionManager.java|  59 +-
 .../federation/router/ConnectionPoolId.java |   6 +
 .../router/TestConnectionManager.java   | 114 +++
 3 files changed, 153 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c2b969e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index 2e45280..594f489 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -32,6 +32,7 @@ import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -303,6 +304,38 @@ public class ConnectionManager {
 return JSON.toString(info);
   }
 
+  @VisibleForTesting
+  Map getPools() {
+return this.pools;
+  }
+
+  /**
+   * Clean the unused connections for this pool.
+   *
+   * @param pool Connection pool to cleanup.
+   */
+  @VisibleForTesting
+  void cleanup(ConnectionPool pool) {
+if (pool.getNumConnections() > pool.getMinSize()) {
+  // Check if the pool hasn't been active in a while or not 50% are used
+  long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
+  int total = pool.getNumConnections();
+  int active = pool.getNumActiveConnections();
+  if (timeSinceLastActive > connectionCleanupPeriodMs ||
+  active < MIN_ACTIVE_RATIO * total) {
+// Remove and close 1 connection
+List conns = pool.removeConnections(1);
+for (ConnectionContext conn : conns) {
+  conn.close();
+}
+LOG.debug("Removed connection {} used {} seconds ago. " +
+"Pool has {}/{} connections", pool.getConnectionPoolId(),
+TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
+pool.getNumConnections(), pool.getMaxSize());
+  }
+}
+  }
+
   /**
* Removes stale connections not accessed recently from the pool. This is
* invoked periodically.
@@ -350,32 +383,6 @@ public class ConnectionManager {
 }
   }
 }
-
-/**
- * Clean the unused connections for this pool.
- *
- * @param pool Connection pool to cleanup.
- */
-private void cleanup(ConnectionPool pool) {
-  if (pool.getNumConnections() > pool.getMinSize()) {
-// Check if the pool hasn't been active in a while or not 50% are used
-long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
-int total = pool.getNumConnections();
-int active = getNumActiveConnections();
-if (timeSinceLastActive > connectionCleanupPeriodMs ||
-active < MIN_ACTIVE_RATIO * total) {
-  // Remove and close 1 connection
-  List conns = pool.removeConnections(1);
-  for (ConnectionContext conn : conns) {
-conn.close();
-  }
-  LOG.debug("Removed connection {} used {} seconds ago. " +
-  "Pool has {}/{} connections", pool.getConnectionPoolId(),
-  TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
-  pool.getNumConnections(), pool.getMaxSize());
-}
-  }
-}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c2b969e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPoolId.java
-

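The fix above comes down to which counters feed the "less than 50% active" test: each pool's own getNumConnections() and getNumActiveConnections(), rather than the manager-wide active count the old inner cleanup() used. A self-contained toy of that per-pool check (plain Java, not the RBF classes):

public class PoolCleanupCheck {
  // Mirrors the "not 50% are used" comment in the patched cleanup(pool).
  static final double MIN_ACTIVE_RATIO = 0.5;

  static boolean eligibleForCleanup(int poolTotal, int poolActive, int poolMinSize) {
    return poolTotal > poolMinSize && poolActive < MIN_ACTIVE_RATIO * poolTotal;
  }

  public static void main(String[] args) {
    // Busy pool: 10 connections, 9 of its own active -> keep them all.
    System.out.println(eligibleForCleanup(10, 9, 1));  // false
    // Idle pool: 10 connections, only 2 active -> one may be reclaimed.
    System.out.println(eligibleForCleanup(10, 2, 1));  // true
    // The pre-patch bug: feeding the busy pool a manager-wide active count
    // (dominated by other, idle pools) could make it look idle and shrink it.
  }
}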
hadoop git commit: HDFS-13232. RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns. Contributed by Chao Sun.

2018-03-08 Thread inigoiri
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 ba0184376 -> e27ee2302


HDFS-13232. RBF: ConnectionManager's cleanup task will compare each pool's own 
active conns with its total conns. Contributed by Chao Sun.

(cherry picked from commit 0c2b969e0161a068bf9ae013c4b95508dfb90a8a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e27ee230
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e27ee230
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e27ee230

Branch: refs/heads/branch-3.1
Commit: e27ee2302c2651edfbe04c41a5b38666a83f67d9
Parents: ba01843
Author: Inigo Goiri 
Authored: Thu Mar 8 09:32:05 2018 -0800
Committer: Inigo Goiri 
Committed: Thu Mar 8 09:32:51 2018 -0800

--
 .../federation/router/ConnectionManager.java|  59 +-
 .../federation/router/ConnectionPoolId.java |   6 +
 .../router/TestConnectionManager.java   | 114 +++
 3 files changed, 153 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e27ee230/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index 2e45280..594f489 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -32,6 +32,7 @@ import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -303,6 +304,38 @@ public class ConnectionManager {
 return JSON.toString(info);
   }
 
+  @VisibleForTesting
+  Map getPools() {
+return this.pools;
+  }
+
+  /**
+   * Clean the unused connections for this pool.
+   *
+   * @param pool Connection pool to cleanup.
+   */
+  @VisibleForTesting
+  void cleanup(ConnectionPool pool) {
+if (pool.getNumConnections() > pool.getMinSize()) {
+  // Check if the pool hasn't been active in a while or not 50% are used
+  long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
+  int total = pool.getNumConnections();
+  int active = pool.getNumActiveConnections();
+  if (timeSinceLastActive > connectionCleanupPeriodMs ||
+  active < MIN_ACTIVE_RATIO * total) {
+// Remove and close 1 connection
+List conns = pool.removeConnections(1);
+for (ConnectionContext conn : conns) {
+  conn.close();
+}
+LOG.debug("Removed connection {} used {} seconds ago. " +
+"Pool has {}/{} connections", pool.getConnectionPoolId(),
+TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
+pool.getNumConnections(), pool.getMaxSize());
+  }
+}
+  }
+
   /**
* Removes stale connections not accessed recently from the pool. This is
* invoked periodically.
@@ -350,32 +383,6 @@ public class ConnectionManager {
 }
   }
 }
-
-/**
- * Clean the unused connections for this pool.
- *
- * @param pool Connection pool to cleanup.
- */
-private void cleanup(ConnectionPool pool) {
-  if (pool.getNumConnections() > pool.getMinSize()) {
-// Check if the pool hasn't been active in a while or not 50% are used
-long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
-int total = pool.getNumConnections();
-int active = getNumActiveConnections();
-if (timeSinceLastActive > connectionCleanupPeriodMs ||
-active < MIN_ACTIVE_RATIO * total) {
-  // Remove and close 1 connection
-  List conns = pool.removeConnections(1);
-  for (ConnectionContext conn : conns) {
-conn.close();
-  }
-  LOG.debug("Removed connection {} used {} seconds ago. " +
-  "Pool has {}/{} connections", pool.getConnectionPoolId(),
-  TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
-  pool.getNumConnections(), pool.getMaxSize());
-}
-  }
-}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e27ee230/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/ro

hadoop git commit: HDFS-13232. RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns. Contributed by Chao Sun.

2018-03-08 Thread inigoiri
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 7a9aea17a -> 68cff8c23


HDFS-13232. RBF: ConnectionManager's cleanup task will compare each pool's own 
active conns with its total conns. Contributed by Chao Sun.

(cherry picked from commit 0c2b969e0161a068bf9ae013c4b95508dfb90a8a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/68cff8c2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/68cff8c2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/68cff8c2

Branch: refs/heads/branch-3.0
Commit: 68cff8c23533ce714edfb9a7f9c1781864e3dc08
Parents: 7a9aea1
Author: Inigo Goiri 
Authored: Thu Mar 8 09:32:05 2018 -0800
Committer: Inigo Goiri 
Committed: Thu Mar 8 09:33:24 2018 -0800

--
 .../federation/router/ConnectionManager.java|  59 +-
 .../federation/router/ConnectionPoolId.java |   6 +
 .../router/TestConnectionManager.java   | 114 +++
 3 files changed, 153 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/68cff8c2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index 2e45280..594f489 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -32,6 +32,7 @@ import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -303,6 +304,38 @@ public class ConnectionManager {
 return JSON.toString(info);
   }
 
+  @VisibleForTesting
+  Map getPools() {
+return this.pools;
+  }
+
+  /**
+   * Clean the unused connections for this pool.
+   *
+   * @param pool Connection pool to cleanup.
+   */
+  @VisibleForTesting
+  void cleanup(ConnectionPool pool) {
+if (pool.getNumConnections() > pool.getMinSize()) {
+  // Check if the pool hasn't been active in a while or not 50% are used
+  long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
+  int total = pool.getNumConnections();
+  int active = pool.getNumActiveConnections();
+  if (timeSinceLastActive > connectionCleanupPeriodMs ||
+  active < MIN_ACTIVE_RATIO * total) {
+// Remove and close 1 connection
+List conns = pool.removeConnections(1);
+for (ConnectionContext conn : conns) {
+  conn.close();
+}
+LOG.debug("Removed connection {} used {} seconds ago. " +
+"Pool has {}/{} connections", pool.getConnectionPoolId(),
+TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
+pool.getNumConnections(), pool.getMaxSize());
+  }
+}
+  }
+
   /**
* Removes stale connections not accessed recently from the pool. This is
* invoked periodically.
@@ -350,32 +383,6 @@ public class ConnectionManager {
 }
   }
 }
-
-/**
- * Clean the unused connections for this pool.
- *
- * @param pool Connection pool to cleanup.
- */
-private void cleanup(ConnectionPool pool) {
-  if (pool.getNumConnections() > pool.getMinSize()) {
-// Check if the pool hasn't been active in a while or not 50% are used
-long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
-int total = pool.getNumConnections();
-int active = getNumActiveConnections();
-if (timeSinceLastActive > connectionCleanupPeriodMs ||
-active < MIN_ACTIVE_RATIO * total) {
-  // Remove and close 1 connection
-  List conns = pool.removeConnections(1);
-  for (ConnectionContext conn : conns) {
-conn.close();
-  }
-  LOG.debug("Removed connection {} used {} seconds ago. " +
-  "Pool has {}/{} connections", pool.getConnectionPoolId(),
-  TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
-  pool.getNumConnections(), pool.getMaxSize());
-}
-  }
-}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/68cff8c2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/ro

hadoop git commit: HDFS-13232. RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns. Contributed by Chao Sun.

2018-03-08 Thread inigoiri
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 e3c80d739 -> 44abf77d9


HDFS-13232. RBF: ConnectionManager's cleanup task will compare each pool's own 
active conns with its total conns. Contributed by Chao Sun.

(cherry picked from commit 0c2b969e0161a068bf9ae013c4b95508dfb90a8a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/44abf77d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/44abf77d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/44abf77d

Branch: refs/heads/branch-2.9
Commit: 44abf77d9b371b0d06569bd4881356ac157b490e
Parents: e3c80d7
Author: Inigo Goiri 
Authored: Thu Mar 8 09:32:05 2018 -0800
Committer: Inigo Goiri 
Committed: Thu Mar 8 09:34:07 2018 -0800

--
 .../federation/router/ConnectionManager.java|  59 +-
 .../federation/router/ConnectionPoolId.java |   6 +
 .../router/TestConnectionManager.java   | 114 +++
 3 files changed, 153 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/44abf77d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index ebf1556..e94f69b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -32,6 +32,7 @@ import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -303,6 +304,38 @@ public class ConnectionManager {
 return JSON.toString(info);
   }
 
+  @VisibleForTesting
+  Map getPools() {
+return this.pools;
+  }
+
+  /**
+   * Clean the unused connections for this pool.
+   *
+   * @param pool Connection pool to cleanup.
+   */
+  @VisibleForTesting
+  void cleanup(ConnectionPool pool) {
+if (pool.getNumConnections() > pool.getMinSize()) {
+  // Check if the pool hasn't been active in a while or not 50% are used
+  long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
+  int total = pool.getNumConnections();
+  int active = pool.getNumActiveConnections();
+  if (timeSinceLastActive > connectionCleanupPeriodMs ||
+  active < MIN_ACTIVE_RATIO * total) {
+// Remove and close 1 connection
+List conns = pool.removeConnections(1);
+for (ConnectionContext conn : conns) {
+  conn.close();
+}
+LOG.debug("Removed connection {} used {} seconds ago. " +
+"Pool has {}/{} connections", pool.getConnectionPoolId(),
+TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
+pool.getNumConnections(), pool.getMaxSize());
+  }
+}
+  }
+
   /**
* Removes stale connections not accessed recently from the pool. This is
* invoked periodically.
@@ -350,32 +383,6 @@ public class ConnectionManager {
 }
   }
 }
-
-/**
- * Clean the unused connections for this pool.
- *
- * @param pool Connection pool to cleanup.
- */
-private void cleanup(ConnectionPool pool) {
-  if (pool.getNumConnections() > pool.getMinSize()) {
-// Check if the pool hasn't been active in a while or not 50% are used
-long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
-int total = pool.getNumConnections();
-int active = getNumActiveConnections();
-if (timeSinceLastActive > connectionCleanupPeriodMs ||
-active < MIN_ACTIVE_RATIO * total) {
-  // Remove and close 1 connection
-  List conns = pool.removeConnections(1);
-  for (ConnectionContext conn : conns) {
-conn.close();
-  }
-  LOG.debug("Removed connection {} used {} seconds ago. " +
-  "Pool has {}/{} connections", pool.getConnectionPoolId(),
-  TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
-  pool.getNumConnections(), pool.getMaxSize());
-}
-  }
-}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/44abf77d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/ro

hadoop git commit: HADOOP-15280. TestKMS.testWebHDFSProxyUserKerb and TestKMS.testWebHDFSProxyUserSimple fail in trunk. Contributed by Bharat Viswanadham.

2018-03-08 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0c2b969e0 -> a906a2264


HADOOP-15280. TestKMS.testWebHDFSProxyUserKerb and 
TestKMS.testWebHDFSProxyUserSimple fail in trunk. Contributed by Bharat 
Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a906a226
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a906a226
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a906a226

Branch: refs/heads/trunk
Commit: a906a226458a0b4c4b2df61d9bcf375a1d194925
Parents: 0c2b969
Author: Xiao Chen 
Authored: Thu Mar 8 10:16:37 2018 -0800
Committer: Xiao Chen 
Committed: Thu Mar 8 10:17:02 2018 -0800

--
 .../java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java  | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a906a226/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
 
b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
index f7ecf44..1189fbf 100644
--- 
a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
+++ 
b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
@@ -2667,7 +2667,11 @@ public class TestKMS {
   kp.createKey("kbb", new KeyProvider.Options(conf));
   Assert.fail();
 } catch (Exception ex) {
-  Assert.assertTrue(ex.getMessage(), 
ex.getMessage().contains("Forbidden"));
+  GenericTestUtils.assertExceptionContains("Error while " +
+  "authenticating with endpoint", ex);
+  GenericTestUtils.assertExceptionContains("Forbidden", ex
+  .getCause().getCause());
+
 }
 return null;
   }


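The new assertions acknowledge that the KMS client now reports the HTTP 403 two levels down the cause chain: GenericTestUtils.assertExceptionContains (from Hadoop's test utilities) is applied to the outer exception and then to getCause().getCause(). A minimal hedged example of the same pattern with a hand-built exception chain (the messages and endpoint below are placeholders):

import org.apache.hadoop.test.GenericTestUtils;

public class CauseChainAssertSketch {
  public static void main(String[] args) {
    // Stand-in for the real client-side exception nesting.
    Exception forbidden = new Exception("Forbidden");
    Exception transport = new Exception("HTTP request failed", forbidden);
    Exception top = new Exception(
        "Error while authenticating with endpoint http://kms.example:9600/kms", transport);

    // Check the outer message, then walk the cause chain for the HTTP error,
    // instead of expecting "Forbidden" to show up at the top level.
    GenericTestUtils.assertExceptionContains("Error while authenticating with endpoint", top);
    GenericTestUtils.assertExceptionContains("Forbidden", top.getCause().getCause());
    System.out.println("assertions passed");
  }
}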



hadoop git commit: YARN-7944. [UI2] Remove master node link from headers of application pages. Contributed by Yesha Vora.

2018-03-08 Thread sunilg
Repository: hadoop
Updated Branches:
  refs/heads/trunk a906a2264 -> 113f401f4


YARN-7944. [UI2] Remove master node link from headers of application pages. 
Contributed by Yesha Vora.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/113f401f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/113f401f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/113f401f

Branch: refs/heads/trunk
Commit: 113f401f41ee575cb303ceb647bc243108d93a04
Parents: a906a22
Author: Sunil G 
Authored: Thu Mar 8 23:52:38 2018 +0530
Committer: Sunil G 
Committed: Thu Mar 8 23:53:36 2018 +0530

--
 .../src/main/webapp/app/models/yarn-app-timeline.js | 1 -
 .../hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js   | 5 -
 .../src/main/webapp/app/serializers/yarn-app-timeline.js| 1 -
 .../hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app.js  | 1 -
 .../hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs   | 2 --
 5 files changed, 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/113f401f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-timeline.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-timeline.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-timeline.js
index fa5223f..8b2702f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-timeline.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-timeline.js
@@ -30,7 +30,6 @@ export default DS.Model.extend({
   finishedTime: DS.attr('finishedTime'),
   progress: DS.attr('number'),
   diagnostics: DS.attr('string'),
-  amContainerLogs: DS.attr('string'),
   amHostHttpAddress: DS.attr('string'),
   logAggregationStatus: DS.attr('string'),
   unmanagedApplication: DS.attr('string'),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/113f401f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
index 5d0f23b..fcc8490 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
@@ -30,7 +30,6 @@ export default DS.Model.extend({
   finishedTime: DS.attr("finishedTime"),
   progress: DS.attr("number"),
   diagnostics: DS.attr("string"),
-  amContainerLogs: DS.attr("string"),
   amHostHttpAddress: DS.attr("string"),
   masterNodeId: DS.attr("string"),
   logAggregationStatus: DS.attr("string"),
@@ -97,10 +96,6 @@ export default DS.Model.extend({
 );
   }.property("memorySeconds", "vcoreSeconds"),
 
-  masterNodeURL: function() {
-return 
`#/yarn-node/${this.get("masterNodeId")}/${this.get("amHostHttpAddress")}/info/`;
-  }.property("masterNodeId", "amHostHttpAddress"),
-
   progressStyle: function() {
 return "width: " + this.get("progress") + "%";
   }.property("progress"),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/113f401f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-timeline.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-timeline.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-timeline.js
index 680fe8c..0496d77 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-timeline.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-timeline.js
@@ -45,7 +45,6 @@ export default DS.JSONAPISerializer.extend({
 progress: 100,
 applicationType: payload.info.YARN_APPLICATION_TYPE,
 diagnostics: (diagnostics && diagnostics !== 'null')? diagnostics : '',
-amContainerLogs: '',
 amHostHttpAddress: '',
 logAggregationStatus: '',
 unmanagedApplication: 
payload.info.YARN_APPLICATION_UNMANAGED_APPLICATION || 'N/A',

http://git-wip-us.apache.org/repos/asf/hadoop/blob/113f401f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app.js
 
b/hadoop-yarn-pr

hadoop git commit: YARN-7944. [UI2] Remove master node link from headers of application pages. Contributed by Yesha Vora.

2018-03-08 Thread sunilg
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 e27ee2302 -> f313e5949


YARN-7944. [UI2] Remove master node link from headers of application pages. 
Contributed by Yesha Vora.

(cherry picked from commit 113f401f41ee575cb303ceb647bc243108d93a04)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f313e594
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f313e594
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f313e594

Branch: refs/heads/branch-3.1
Commit: f313e594940c5de4f15be9f8da4e8e3e8ee3c81d
Parents: e27ee23
Author: Sunil G 
Authored: Thu Mar 8 23:52:38 2018 +0530
Committer: Sunil G 
Committed: Thu Mar 8 23:54:15 2018 +0530

--
 .../src/main/webapp/app/models/yarn-app-timeline.js | 1 -
 .../hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js   | 5 -
 .../src/main/webapp/app/serializers/yarn-app-timeline.js| 1 -
 .../hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app.js  | 1 -
 .../hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs   | 2 --
 5 files changed, 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f313e594/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-timeline.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-timeline.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-timeline.js
index fa5223f..8b2702f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-timeline.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-timeline.js
@@ -30,7 +30,6 @@ export default DS.Model.extend({
   finishedTime: DS.attr('finishedTime'),
   progress: DS.attr('number'),
   diagnostics: DS.attr('string'),
-  amContainerLogs: DS.attr('string'),
   amHostHttpAddress: DS.attr('string'),
   logAggregationStatus: DS.attr('string'),
   unmanagedApplication: DS.attr('string'),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f313e594/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
index 5d0f23b..fcc8490 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
@@ -30,7 +30,6 @@ export default DS.Model.extend({
   finishedTime: DS.attr("finishedTime"),
   progress: DS.attr("number"),
   diagnostics: DS.attr("string"),
-  amContainerLogs: DS.attr("string"),
   amHostHttpAddress: DS.attr("string"),
   masterNodeId: DS.attr("string"),
   logAggregationStatus: DS.attr("string"),
@@ -97,10 +96,6 @@ export default DS.Model.extend({
     );
   }.property("memorySeconds", "vcoreSeconds"),
 
-  masterNodeURL: function() {
-    return `#/yarn-node/${this.get("masterNodeId")}/${this.get("amHostHttpAddress")}/info/`;
-  }.property("masterNodeId", "amHostHttpAddress"),
-
   progressStyle: function() {
     return "width: " + this.get("progress") + "%";
   }.property("progress"),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f313e594/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-timeline.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-timeline.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-timeline.js
index 680fe8c..0496d77 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-timeline.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-timeline.js
@@ -45,7 +45,6 @@ export default DS.JSONAPISerializer.extend({
         progress: 100,
         applicationType: payload.info.YARN_APPLICATION_TYPE,
         diagnostics: (diagnostics && diagnostics !== 'null')? diagnostics : '',
-        amContainerLogs: '',
         amHostHttpAddress: '',
         logAggregationStatus: '',
         unmanagedApplication: payload.info.YARN_APPLICATION_UNMANAGED_APPLICATION || 'N/A',

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f313e594/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app.js
--
diff --git 
a/hadoop-yarn-project/hadoop-ya

hadoop git commit: HDFS-13232. RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns. Contributed by Chao Sun.

2018-03-08 Thread inigoiri
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 207daabbc -> 95f34ce46


HDFS-13232. RBF: ConnectionManager's cleanup task will compare each pool's own 
active conns with its total conns. Contributed by Chao Sun.

(cherry picked from commit 0c2b969e0161a068bf9ae013c4b95508dfb90a8a)
(cherry picked from commit 44abf77d9b371b0d06569bd4881356ac157b490e)
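
In short, the fix makes the trim decision from the pool's own counters instead of a manager-wide active count. A small, self-contained sketch of that check (simplified names and parameters; not the actual ConnectionManager code, which follows in the diff below):

public class PoolCleanupSketch {

  /** Returns true when one idle connection should be removed from the pool. */
  static boolean shouldTrimPool(long timeSinceLastActiveMs, long cleanupPeriodMs,
      int poolActiveConns, int poolTotalConns, double minActiveRatio) {
    // The bug compared a manager-wide active count with this pool's total;
    // the fix compares the pool's own active connections with its own total.
    return timeSinceLastActiveMs > cleanupPeriodMs
        || poolActiveConns < minActiveRatio * poolTotalConns;
  }

  public static void main(String[] args) {
    // A pool with 2 of its 10 connections active (20% < 50%) gives up one idle
    // connection even if other pools in the same manager are busy.
    System.out.println(shouldTrimPool(1000L, 10000L, 2, 10, 0.5));  // true
  }
}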


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/95f34ce4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/95f34ce4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/95f34ce4

Branch: refs/heads/branch-2
Commit: 95f34ce462b23b632147a9877c9904c11832afce
Parents: 207daab
Author: Inigo Goiri 
Authored: Thu Mar 8 09:32:05 2018 -0800
Committer: Inigo Goiri 
Committed: Thu Mar 8 10:24:41 2018 -0800

--
 .../federation/router/ConnectionManager.java|  59 +-
 .../federation/router/ConnectionPoolId.java |   6 +
 .../router/TestConnectionManager.java   | 114 +++
 3 files changed, 153 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/95f34ce4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index ebf1556..e94f69b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -32,6 +32,7 @@ import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -303,6 +304,38 @@ public class ConnectionManager {
     return JSON.toString(info);
   }
 
+  @VisibleForTesting
+  Map<ConnectionPoolId, ConnectionPool> getPools() {
+    return this.pools;
+  }
+
+  /**
+   * Clean the unused connections for this pool.
+   *
+   * @param pool Connection pool to cleanup.
+   */
+  @VisibleForTesting
+  void cleanup(ConnectionPool pool) {
+    if (pool.getNumConnections() > pool.getMinSize()) {
+      // Check if the pool hasn't been active in a while or not 50% are used
+      long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
+      int total = pool.getNumConnections();
+      int active = pool.getNumActiveConnections();
+      if (timeSinceLastActive > connectionCleanupPeriodMs ||
+          active < MIN_ACTIVE_RATIO * total) {
+        // Remove and close 1 connection
+        List<ConnectionContext> conns = pool.removeConnections(1);
+        for (ConnectionContext conn : conns) {
+          conn.close();
+        }
+        LOG.debug("Removed connection {} used {} seconds ago. " +
+            "Pool has {}/{} connections", pool.getConnectionPoolId(),
+            TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
+            pool.getNumConnections(), pool.getMaxSize());
+      }
+    }
+  }
+
   /**
    * Removes stale connections not accessed recently from the pool. This is
    * invoked periodically.
@@ -350,32 +383,6 @@ public class ConnectionManager {
         }
       }
     }
-
-    /**
-     * Clean the unused connections for this pool.
-     *
-     * @param pool Connection pool to cleanup.
-     */
-    private void cleanup(ConnectionPool pool) {
-      if (pool.getNumConnections() > pool.getMinSize()) {
-        // Check if the pool hasn't been active in a while or not 50% are used
-        long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
-        int total = pool.getNumConnections();
-        int active = getNumActiveConnections();
-        if (timeSinceLastActive > connectionCleanupPeriodMs ||
-            active < MIN_ACTIVE_RATIO * total) {
-          // Remove and close 1 connection
-          List<ConnectionContext> conns = pool.removeConnections(1);
-          for (ConnectionContext conn : conns) {
-            conn.close();
-          }
-          LOG.debug("Removed connection {} used {} seconds ago. " +
-              "Pool has {}/{} connections", pool.getConnectionPoolId(),
-              TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
-              pool.getNumConnections(), pool.getMaxSize());
-        }
-      }
-    }
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95f34ce4/hadoop-hdfs-project/hado

hadoop git commit: HDFS-12614. FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider configured. Contributed by Manoj Govindassamy.

2018-03-08 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 95f34ce46 -> 0aa52d408


HDFS-12614. FSPermissionChecker#getINodeAttrs() throws NPE when 
INodeAttributesProvider configured. Contributed by Manoj Govindassamy.
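
The NPE happens because the path components for the root-only path "/" resolve to a single null element. A standalone sketch of the added guard (assumed simplification: a plain UTF-8 conversion stands in for DFSUtil.bytes2String, and the array shape mimics what INode#getPathComponents returns):

import java.nio.charset.StandardCharsets;

public class RootPathGuardSketch {

  static String[] toElements(byte[][] pathByNameArr, int pathIdx) {
    String[] elements = new String[pathIdx + 1];
    if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
      // Root path "/": map the single null component to an empty string.
      elements[0] = "";
    } else {
      for (int i = 0; i < elements.length; i++) {
        // Would throw NullPointerException if a null component slipped through.
        elements[i] = new String(pathByNameArr[i], StandardCharsets.UTF_8);
      }
    }
    return elements;
  }

  public static void main(String[] args) {
    byte[][] rootComponents = { null };  // what "/" resolves to
    System.out.println(toElements(rootComponents, 0)[0].isEmpty());  // true, no NPE
  }
}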


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0aa52d40
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0aa52d40
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0aa52d40

Branch: refs/heads/branch-2
Commit: 0aa52d4085f8c77dbfd9d913011c347882573ad9
Parents: 95f34ce
Author: Kihwal Lee 
Authored: Thu Mar 8 14:20:09 2018 -0600
Committer: Kihwal Lee 
Committed: Thu Mar 8 14:22:17 2018 -0600

--
 .../server/namenode/FSPermissionChecker.java| 12 +++-
 .../namenode/TestINodeAttributeProvider.java| 63 +++-
 2 files changed, 57 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0aa52d40/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
index 22f9b99..8c53308 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
@@ -271,8 +271,16 @@ public class FSPermissionChecker implements AccessControlEnforcer {
     INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
     if (getAttributesProvider() != null) {
       String[] elements = new String[pathIdx + 1];
-      for (int i = 0; i < elements.length; i++) {
-        elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
+      /**
+       * {@link INode#getPathComponents(String)} returns a null component
+       * for the root only path "/". Assign an empty string if so.
+       */
+      if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
+        elements[0] = "";
+      } else {
+        for (int i = 0; i < elements.length; i++) {
+          elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
+        }
       }
       inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0aa52d40/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
index bbc5fa0..5495692 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
@@ -313,31 +313,62 @@ public class TestINodeAttributeProvider {
     testBypassProviderHelper(users, HDFS_PERMISSION, true);
   }
 
-  @Test
-  public void testCustomProvider() throws Exception {
+  private void verifyFileStatus(UserGroupInformation ugi) throws IOException {
     FileSystem fs = FileSystem.get(miniDFS.getConfiguration(0));
-    fs.mkdirs(new Path("/user/xxx"));
-    FileStatus status = fs.getFileStatus(new Path("/user/xxx"));
-    Assert.assertEquals(System.getProperty("user.name"), status.getOwner());
+
+    FileStatus status = fs.getFileStatus(new Path("/"));
+    LOG.info("Path '/' is owned by: "
+        + status.getOwner() + ":" + status.getGroup());
+
+    Path userDir = new Path("/user/" + ugi.getShortUserName());
+    fs.mkdirs(userDir);
+    status = fs.getFileStatus(userDir);
+    Assert.assertEquals(ugi.getShortUserName(), status.getOwner());
     Assert.assertEquals("supergroup", status.getGroup());
     Assert.assertEquals(new FsPermission((short) 0755), status.getPermission());
-    fs.mkdirs(new Path("/user/authz"));
-    Path p = new Path("/user/authz");
-    status = fs.getFileStatus(p);
+
+    Path authzDir = new Path("/user/authz");
+    fs.mkdirs(authzDir);
+    status = fs.getFileStatus(authzDir);
     Assert.assertEquals("foo", status.getOwner());
     Assert.assertEquals("bar", status.getGroup());
     Assert.assertEquals(new FsPermission((short) 0770), status.getPermission());
-    AclStatus aclStatus = fs.getAclStatus(p);
+
+    AclStatus aclStatus = fs.getAclStatus(authzDir);
     Assert.assertEquals(

hadoop git commit: HDFS-12614. FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider configured. Contributed by Manoj Govindassamy.

2018-03-08 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 44abf77d9 -> 6fafaa7b7


HDFS-12614. FSPermissionChecker#getINodeAttrs() throws NPE when 
INodeAttributesProvider configured. Contributed by Manoj Govindassamy.

(cherry picked from commit 0aa52d4085f8c77dbfd9d913011c347882573ad9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6fafaa7b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6fafaa7b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6fafaa7b

Branch: refs/heads/branch-2.9
Commit: 6fafaa7b7fafa2005386779fb4c92750f256c336
Parents: 44abf77
Author: Kihwal Lee 
Authored: Thu Mar 8 14:26:42 2018 -0600
Committer: Kihwal Lee 
Committed: Thu Mar 8 14:26:42 2018 -0600

--
 .../server/namenode/FSPermissionChecker.java| 12 +++-
 .../namenode/TestINodeAttributeProvider.java| 63 +++-
 2 files changed, 57 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6fafaa7b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
index 388e91f..ec82ab7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
@@ -271,8 +271,16 @@ public class FSPermissionChecker implements AccessControlEnforcer {
     INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
     if (getAttributesProvider() != null) {
       String[] elements = new String[pathIdx + 1];
-      for (int i = 0; i < elements.length; i++) {
-        elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
+      /**
+       * {@link INode#getPathComponents(String)} returns a null component
+       * for the root only path "/". Assign an empty string if so.
+       */
+      if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
+        elements[0] = "";
+      } else {
+        for (int i = 0; i < elements.length; i++) {
+          elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
+        }
       }
       inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6fafaa7b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
index bbc5fa0..5495692 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
@@ -313,31 +313,62 @@ public class TestINodeAttributeProvider {
     testBypassProviderHelper(users, HDFS_PERMISSION, true);
   }
 
-  @Test
-  public void testCustomProvider() throws Exception {
+  private void verifyFileStatus(UserGroupInformation ugi) throws IOException {
     FileSystem fs = FileSystem.get(miniDFS.getConfiguration(0));
-    fs.mkdirs(new Path("/user/xxx"));
-    FileStatus status = fs.getFileStatus(new Path("/user/xxx"));
-    Assert.assertEquals(System.getProperty("user.name"), status.getOwner());
+
+    FileStatus status = fs.getFileStatus(new Path("/"));
+    LOG.info("Path '/' is owned by: "
+        + status.getOwner() + ":" + status.getGroup());
+
+    Path userDir = new Path("/user/" + ugi.getShortUserName());
+    fs.mkdirs(userDir);
+    status = fs.getFileStatus(userDir);
+    Assert.assertEquals(ugi.getShortUserName(), status.getOwner());
     Assert.assertEquals("supergroup", status.getGroup());
     Assert.assertEquals(new FsPermission((short) 0755), status.getPermission());
-    fs.mkdirs(new Path("/user/authz"));
-    Path p = new Path("/user/authz");
-    status = fs.getFileStatus(p);
+
+    Path authzDir = new Path("/user/authz");
+    fs.mkdirs(authzDir);
+    status = fs.getFileStatus(authzDir);
     Assert.assertEquals("foo", status.getOwner());
     Assert.assertEquals("bar", status.getGroup());
     Assert.assertEquals(new FsPermission((short) 0770), status.getPermission());
-    AclStatus aclStatus = fs.getAclStatus(p);
+
+

hadoop git commit: HDFS-12614. FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider configured. Contributed by Manoj Govindassamy.

2018-03-08 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 4b0b466f8 -> f4621e022


HDFS-12614. FSPermissionChecker#getINodeAttrs() throws NPE when 
INodeAttributesProvider configured. Contributed by Manoj Govindassamy.

(cherry picked from commit 0aa52d4085f8c77dbfd9d913011c347882573ad9)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f4621e02
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f4621e02
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f4621e02

Branch: refs/heads/branch-2.8
Commit: f4621e0221ef14291ac616dc43c673b50a6d1550
Parents: 4b0b466
Author: Kihwal Lee 
Authored: Thu Mar 8 14:38:47 2018 -0600
Committer: Kihwal Lee 
Committed: Thu Mar 8 14:38:47 2018 -0600

--
 .../server/namenode/FSPermissionChecker.java| 12 +++-
 .../namenode/TestINodeAttributeProvider.java| 60 ++--
 2 files changed, 54 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f4621e02/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
index 107d563..46d0959 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
@@ -239,8 +239,16 @@ class FSPermissionChecker implements AccessControlEnforcer {
     INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
     if (getAttributesProvider() != null) {
      String[] elements = new String[pathIdx + 1];
-      for (int i = 0; i < elements.length; i++) {
-        elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
+      /**
+       * {@link INode#getPathComponents(String)} returns a null component
+       * for the root only path "/". Assign an empty string if so.
+       */
+      if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
+        elements[0] = "";
+      } else {
+        for (int i = 0; i < elements.length; i++) {
+          elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
+        }
       }
       inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f4621e02/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
index ffdc535..1837525 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
@@ -222,31 +222,59 @@ public class TestINodeAttributeProvider {
     });
   }
 
-  @Test
-  public void testCustomProvider() throws Exception {
+  private void verifyFileStatus(UserGroupInformation ugi) throws IOException {
     FileSystem fs = FileSystem.get(miniDFS.getConfiguration(0));
-    fs.mkdirs(new Path("/user/xxx"));
-    FileStatus status = fs.getFileStatus(new Path("/user/xxx"));
-    Assert.assertEquals(System.getProperty("user.name"), status.getOwner());
+
+    FileStatus status = fs.getFileStatus(new Path("/"));
+    Path userDir = new Path("/user/" + ugi.getShortUserName());
+    fs.mkdirs(userDir);
+    status = fs.getFileStatus(userDir);
+    Assert.assertEquals(ugi.getShortUserName(), status.getOwner());
     Assert.assertEquals("supergroup", status.getGroup());
     Assert.assertEquals(new FsPermission((short) 0755), status.getPermission());
-    fs.mkdirs(new Path("/user/authz"));
-    Path p = new Path("/user/authz");
-    status = fs.getFileStatus(p);
+
+    Path authzDir = new Path("/user/authz");
+    fs.mkdirs(authzDir);
+    status = fs.getFileStatus(authzDir);
     Assert.assertEquals("foo", status.getOwner());
     Assert.assertEquals("bar", status.getGroup());
     Assert.assertEquals(new FsPermission((short) 0770), status.getPermission());
-    AclStatus aclStatus = fs.getAclStatus(p);
+
+    AclStatus aclStatus

hadoop git commit: HDFS-13233. RBF: MountTableResolver doesn't return the correct mount point of the given path. Contributed by wangzhiyuan.

2018-03-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 113f401f4 -> 122805b43


HDFS-13233. RBF: MountTableResolver doesn't return the correct mount point of 
the given path. Contributed by wangzhiyuan.
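
The wrong result comes from a bare prefix match: a mount entry such as /user/a would also claim /user/a1. A standalone sketch of the boundary check (simplified: the separator is hard-coded as '/'; the real method is MountTableResolver#isParentEntry shown in the diff below):

public class MountPointMatchSketch {

  static boolean isParentEntry(String path, String parent) {
    if (!path.startsWith(parent)) {
      return false;
    }
    if (path.equals(parent)) {
      return true;
    }
    // Require a path-separator boundary after the parent (or the parent is "/"),
    // so "/user/a" is not treated as the mount point of "/user/a1".
    return path.charAt(parent.length()) == '/' || parent.equals("/");
  }

  public static void main(String[] args) {
    System.out.println(isParentEntry("/user/a1", "/user/a"));   // false
    System.out.println(isParentEntry("/user/a/11", "/user/a")); // true
    System.out.println(isParentEntry("/user", "/"));            // true
  }
}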


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/122805b4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/122805b4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/122805b4

Branch: refs/heads/trunk
Commit: 122805b43acff2b094bd984fa76dbc8d2e110edd
Parents: 113f401
Author: Yiqun Lin 
Authored: Fri Mar 9 15:42:57 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Mar 9 15:42:57 2018 +0800

--
 .../federation/resolver/MountTableResolver.java | 13 ++-
 .../resolver/TestMountTableResolver.java| 23 
 .../federation/router/TestRouterQuota.java  |  2 +-
 3 files changed, 36 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/122805b4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
index 374e3ba..dac6f7f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
@@ -521,6 +521,17 @@ public class MountTableResolver
     return this.defaultNameService;
   }
 
+  private boolean isParentEntry(final String path, final String parent) {
+    if (!path.startsWith(parent)) {
+      return false;
+    }
+    if (path.equals(parent)) {
+      return true;
+    }
+    return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
+        || parent.equals(Path.SEPARATOR);
+  }
+
   /**
    * Find the deepest mount point for a path.
    * @param path Path to look for.
@@ -530,7 +541,7 @@ public class MountTableResolver
     readLock.lock();
     try {
       Entry<String, MountTable> entry = this.tree.floorEntry(path);
-      while (entry != null && !path.startsWith(entry.getKey())) {
+      while (entry != null && !isParentEntry(path, entry.getKey())) {
         entry = this.tree.lowerEntry(entry.getKey());
       }
       if (entry == null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/122805b4/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
index fa2f89c..a09daf0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
@@ -179,6 +179,29 @@ public class TestMountTableResolver {
   }
 
   @Test
+  public void testGetMountPoint() throws IOException {
+    // Check get the mount table entry for a path
+    MountTable mtEntry;
+    mtEntry = mountTable.getMountPoint("/");
+    assertTrue(mtEntry.getSourcePath().equals("/"));
+
+    mtEntry = mountTable.getMountPoint("/user");
+    assertTrue(mtEntry.getSourcePath().equals("/user"));
+
+    mtEntry = mountTable.getMountPoint("/user/a");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a/");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a/11");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a1");
+    assertTrue(mtEntry.getSourcePath().equals("/user"));
+  }
+
+  @Test
   public void testGetMountPoints() throws IOException {
 
     // Check getting all mount points (virtual and real) beneath a path

http://git-wip-us.apache.org/repos/asf/hadoop/blob/122805b4/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/or

hadoop git commit: HDFS-13233. RBF: MountTableResolver doesn't return the correct mount point of the given path. Contributed by wangzhiyuan.

2018-03-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 f313e5949 -> 9368f7fba


HDFS-13233. RBF: MountTableResolver doesn't return the correct mount point of 
the given path. Contributed by wangzhiyuan.

(cherry picked from commit 122805b43acff2b094bd984fa76dbc8d2e110edd)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9368f7fb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9368f7fb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9368f7fb

Branch: refs/heads/branch-3.1
Commit: 9368f7fbabaf4b60624cee368db8a80ba58545b4
Parents: f313e59
Author: Yiqun Lin 
Authored: Fri Mar 9 15:42:57 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Mar 9 15:44:35 2018 +0800

--
 .../federation/resolver/MountTableResolver.java | 13 ++-
 .../resolver/TestMountTableResolver.java| 23 
 .../federation/router/TestRouterQuota.java  |  2 +-
 3 files changed, 36 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9368f7fb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
index 374e3ba..dac6f7f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
@@ -521,6 +521,17 @@ public class MountTableResolver
     return this.defaultNameService;
   }
 
+  private boolean isParentEntry(final String path, final String parent) {
+    if (!path.startsWith(parent)) {
+      return false;
+    }
+    if (path.equals(parent)) {
+      return true;
+    }
+    return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
+        || parent.equals(Path.SEPARATOR);
+  }
+
   /**
    * Find the deepest mount point for a path.
    * @param path Path to look for.
@@ -530,7 +541,7 @@ public class MountTableResolver
     readLock.lock();
     try {
       Entry<String, MountTable> entry = this.tree.floorEntry(path);
-      while (entry != null && !path.startsWith(entry.getKey())) {
+      while (entry != null && !isParentEntry(path, entry.getKey())) {
         entry = this.tree.lowerEntry(entry.getKey());
       }
       if (entry == null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9368f7fb/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
index fa2f89c..a09daf0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
@@ -179,6 +179,29 @@ public class TestMountTableResolver {
   }
 
   @Test
+  public void testGetMountPoint() throws IOException {
+    // Check get the mount table entry for a path
+    MountTable mtEntry;
+    mtEntry = mountTable.getMountPoint("/");
+    assertTrue(mtEntry.getSourcePath().equals("/"));
+
+    mtEntry = mountTable.getMountPoint("/user");
+    assertTrue(mtEntry.getSourcePath().equals("/user"));
+
+    mtEntry = mountTable.getMountPoint("/user/a");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a/");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a/11");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a1");
+    assertTrue(mtEntry.getSourcePath().equals("/user"));
+  }
+
+  @Test
   public void testGetMountPoints() throws IOException {
 
     // Check getting all mount points (virtual and real) beneath a path

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9368f7fb/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/

hadoop git commit: HDFS-13233. RBF: MountTableResolver doesn't return the correct mount point of the given path. Contributed by wangzhiyuan.

2018-03-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 68cff8c23 -> f879504fe


HDFS-13233. RBF: MountTableResolver doesn't return the correct mount point of 
the given path. Contributed by wangzhiyuan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f879504f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f879504f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f879504f

Branch: refs/heads/branch-3.0
Commit: f879504fe13267690e4306160210536c1b21b8e3
Parents: 68cff8c
Author: Yiqun Lin 
Authored: Fri Mar 9 15:48:36 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Mar 9 15:48:36 2018 +0800

--
 .../federation/resolver/MountTableResolver.java | 13 ++-
 .../resolver/TestMountTableResolver.java| 23 
 2 files changed, 35 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f879504f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
index 374e3ba..dac6f7f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
@@ -521,6 +521,17 @@ public class MountTableResolver
     return this.defaultNameService;
   }
 
+  private boolean isParentEntry(final String path, final String parent) {
+    if (!path.startsWith(parent)) {
+      return false;
+    }
+    if (path.equals(parent)) {
+      return true;
+    }
+    return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
+        || parent.equals(Path.SEPARATOR);
+  }
+
   /**
    * Find the deepest mount point for a path.
    * @param path Path to look for.
@@ -530,7 +541,7 @@ public class MountTableResolver
     readLock.lock();
     try {
       Entry<String, MountTable> entry = this.tree.floorEntry(path);
-      while (entry != null && !path.startsWith(entry.getKey())) {
+      while (entry != null && !isParentEntry(path, entry.getKey())) {
         entry = this.tree.lowerEntry(entry.getKey());
       }
       if (entry == null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f879504f/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
index fa2f89c..a09daf0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
@@ -179,6 +179,29 @@ public class TestMountTableResolver {
   }
 
   @Test
+  public void testGetMountPoint() throws IOException {
+    // Check get the mount table entry for a path
+    MountTable mtEntry;
+    mtEntry = mountTable.getMountPoint("/");
+    assertTrue(mtEntry.getSourcePath().equals("/"));
+
+    mtEntry = mountTable.getMountPoint("/user");
+    assertTrue(mtEntry.getSourcePath().equals("/user"));
+
+    mtEntry = mountTable.getMountPoint("/user/a");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a/");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a/11");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a1");
+    assertTrue(mtEntry.getSourcePath().equals("/user"));
+  }
+
+  @Test
   public void testGetMountPoints() throws IOException {
 
     // Check getting all mount points (virtual and real) beneath a path


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-13233. RBF: MountTableResolver doesn't return the correct mount point of the given path. Contributed by wangzhiyuan.

2018-03-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 0aa52d408 -> aa748c6bc


HDFS-13233. RBF: MountTableResolver doesn't return the correct mount point of 
the given path. Contributed by wangzhiyuan.

(cherry picked from commit 122805b43acff2b094bd984fa76dbc8d2e110edd)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aa748c6b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aa748c6b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aa748c6b

Branch: refs/heads/branch-2
Commit: aa748c6bca6ee75eec24c27f7fb32eda01b720b6
Parents: 0aa52d4
Author: Yiqun Lin 
Authored: Fri Mar 9 15:42:57 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Mar 9 15:51:56 2018 +0800

--
 .../federation/resolver/MountTableResolver.java | 13 ++-
 .../resolver/TestMountTableResolver.java| 23 
 .../federation/router/TestRouterQuota.java  |  2 +-
 3 files changed, 36 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa748c6b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
index 374e3ba..dac6f7f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
@@ -521,6 +521,17 @@ public class MountTableResolver
     return this.defaultNameService;
   }
 
+  private boolean isParentEntry(final String path, final String parent) {
+    if (!path.startsWith(parent)) {
+      return false;
+    }
+    if (path.equals(parent)) {
+      return true;
+    }
+    return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
+        || parent.equals(Path.SEPARATOR);
+  }
+
   /**
    * Find the deepest mount point for a path.
    * @param path Path to look for.
@@ -530,7 +541,7 @@ public class MountTableResolver
     readLock.lock();
     try {
       Entry<String, MountTable> entry = this.tree.floorEntry(path);
-      while (entry != null && !path.startsWith(entry.getKey())) {
+      while (entry != null && !isParentEntry(path, entry.getKey())) {
         entry = this.tree.lowerEntry(entry.getKey());
       }
       if (entry == null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa748c6b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
index fa2f89c..a09daf0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
@@ -179,6 +179,29 @@ public class TestMountTableResolver {
   }
 
   @Test
+  public void testGetMountPoint() throws IOException {
+    // Check get the mount table entry for a path
+    MountTable mtEntry;
+    mtEntry = mountTable.getMountPoint("/");
+    assertTrue(mtEntry.getSourcePath().equals("/"));
+
+    mtEntry = mountTable.getMountPoint("/user");
+    assertTrue(mtEntry.getSourcePath().equals("/user"));
+
+    mtEntry = mountTable.getMountPoint("/user/a");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a/");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a/11");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a1");
+    assertTrue(mtEntry.getSourcePath().equals("/user"));
+  }
+
+  @Test
   public void testGetMountPoints() throws IOException {
 
     // Check getting all mount points (virtual and real) beneath a path

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa748c6b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/rout

hadoop git commit: HDFS-13233. RBF: MountTableResolver doesn't return the correct mount point of the given path. Contributed by wangzhiyuan.

2018-03-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 6fafaa7b7 -> 04e18e747


HDFS-13233. RBF: MountTableResolver doesn't return the correct mount point of 
the given path. Contributed by wangzhiyuan.

(cherry picked from commit f879504fe13267690e4306160210536c1b21b8e3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/04e18e74
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/04e18e74
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/04e18e74

Branch: refs/heads/branch-2.9
Commit: 04e18e747b3bef9477c0897822101246029dfa2a
Parents: 6fafaa7
Author: Yiqun Lin 
Authored: Fri Mar 9 15:48:36 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Mar 9 15:53:17 2018 +0800

--
 .../federation/resolver/MountTableResolver.java | 13 ++-
 .../resolver/TestMountTableResolver.java| 23 
 2 files changed, 35 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/04e18e74/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
index 374e3ba..dac6f7f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
@@ -521,6 +521,17 @@ public class MountTableResolver
     return this.defaultNameService;
   }
 
+  private boolean isParentEntry(final String path, final String parent) {
+    if (!path.startsWith(parent)) {
+      return false;
+    }
+    if (path.equals(parent)) {
+      return true;
+    }
+    return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
+        || parent.equals(Path.SEPARATOR);
+  }
+
   /**
    * Find the deepest mount point for a path.
    * @param path Path to look for.
@@ -530,7 +541,7 @@ public class MountTableResolver
     readLock.lock();
     try {
       Entry<String, MountTable> entry = this.tree.floorEntry(path);
-      while (entry != null && !path.startsWith(entry.getKey())) {
+      while (entry != null && !isParentEntry(path, entry.getKey())) {
         entry = this.tree.lowerEntry(entry.getKey());
       }
       if (entry == null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/04e18e74/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
index fa2f89c..a09daf0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
@@ -179,6 +179,29 @@ public class TestMountTableResolver {
   }
 
   @Test
+  public void testGetMountPoint() throws IOException {
+    // Check get the mount table entry for a path
+    MountTable mtEntry;
+    mtEntry = mountTable.getMountPoint("/");
+    assertTrue(mtEntry.getSourcePath().equals("/"));
+
+    mtEntry = mountTable.getMountPoint("/user");
+    assertTrue(mtEntry.getSourcePath().equals("/user"));
+
+    mtEntry = mountTable.getMountPoint("/user/a");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a/");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a/11");
+    assertTrue(mtEntry.getSourcePath().equals("/user/a"));
+
+    mtEntry = mountTable.getMountPoint("/user/a1");
+    assertTrue(mtEntry.getSourcePath().equals("/user"));
+  }
+
+  @Test
   public void testGetMountPoints() throws IOException {
 
     // Check getting all mount points (virtual and real) beneath a path


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org