hadoop git commit: HDFS-13100. Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs. Contributed by Aaron T. Myers and Yongjun Zhang. (cherry picked from commit 4e9a59c

2018-02-02 Thread yjzhangal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7cd3770fe -> 6dfea0504


HDFS-13100. Handle IllegalArgumentException when GETSERVERDEFAULTS is not 
implemented in webhdfs. Contributed by Aaron T. Myers and Yongjun Zhang.
(cherry picked from commit 4e9a59ce16e81b4bd6fae443a997ef24d588a6e8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6dfea050
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6dfea050
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6dfea050

Branch: refs/heads/branch-2
Commit: 6dfea05049c1367dee6311cbc2c0c24df1dbae8e
Parents: 7cd3770
Author: Yongjun Zhang 
Authored: Fri Feb 2 22:58:44 2018 -0800
Committer: Yongjun Zhang 
Committed: Fri Feb 2 23:33:10 2018 -0800

--
 .../org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java| 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6dfea050/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index ab3ed66..3ec680e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -1911,8 +1911,17 @@ public class WebHdfsFileSystem extends FileSystem
 try {
   keyProviderUri = getServerDefaults().getKeyProviderUri();
 } catch (UnsupportedOperationException e) {
-  // This means server doesn't supports GETSERVERDEFAULTS call.
+  // This means server doesn't support GETSERVERDEFAULTS call.
   // Do nothing, let keyProviderUri = null.
+} catch (RemoteException e) {
+  if (e.getClassName() != null &&
+  e.getClassName().equals("java.lang.IllegalArgumentException")) {
+// See HDFS-13100.
+// This means server doesn't support GETSERVERDEFAULTS call.
+// Do nothing, let keyProviderUri = null.
+  } else {
+throw e;
+  }
 }
 return HdfsKMSUtil.getKeyProviderUri(ugi, getUri(), keyProviderUri,
 getConf());
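The pattern in this patch — unwrapping a `RemoteException` by its remote class name to detect a server that does not support the operation, and falling back instead of failing — can be sketched in isolation. Everything below is a hypothetical stand-in: the `RemoteException` class and the server call are simplified mocks, not Hadoop's real `org.apache.hadoop.ipc.RemoteException` or WebHDFS client.

```java
import java.io.IOException;

public class RemoteFallbackSketch {
  // Simplified stand-in for org.apache.hadoop.ipc.RemoteException,
  // which carries the class name of the exception thrown server-side.
  static class RemoteException extends IOException {
    private final String className;
    RemoteException(String className, String msg) {
      super(msg);
      this.className = className;
    }
    String getClassName() { return className; }
  }

  // Pretend server call; an old server rejects the unknown op parameter
  // with an IllegalArgumentException rather than UnsupportedOperationException.
  static String getServerDefaults(boolean oldServer) throws RemoteException {
    if (oldServer) {
      throw new RemoteException("java.lang.IllegalArgumentException",
          "Invalid value for webhdfs parameter \"op\": GETSERVERDEFAULTS");
    }
    return "kms://http@kms:9600/kms";
  }

  static String keyProviderUriOrNull(boolean oldServer) throws IOException {
    try {
      return getServerDefaults(oldServer);
    } catch (RemoteException e) {
      if ("java.lang.IllegalArgumentException".equals(e.getClassName())) {
        return null;  // old server: fall back, as in HDFS-13100
      }
      throw e;  // anything else is a real error and must propagate
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println(keyProviderUriOrNull(true));   // null (fallback taken)
    System.out.println(keyProviderUriOrNull(false));  // the key provider URI
  }
}
```

The check on `getClassName()` is the key design choice: the client cannot catch the remote `IllegalArgumentException` directly because RPC delivers it wrapped, so the class name string is the only reliable discriminator.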


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-13100. Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs. Contributed by Aaron T. Myers and Yongjun Zhang. (cherry picked from commit 4e9a59c

2018-02-02 Thread yjzhangal
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0.1 7142d8734 -> 0524fac21


HDFS-13100. Handle IllegalArgumentException when GETSERVERDEFAULTS is not 
implemented in webhdfs. Contributed by Aaron T. Myers and Yongjun Zhang.
(cherry picked from commit 4e9a59ce16e81b4bd6fae443a997ef24d588a6e8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0524fac2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0524fac2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0524fac2

Branch: refs/heads/branch-3.0.1
Commit: 0524fac21ef8a8e440a2fe759936e6fb6a63cbf1
Parents: 7142d87
Author: Yongjun Zhang 
Authored: Fri Feb 2 22:58:44 2018 -0800
Committer: Yongjun Zhang 
Committed: Fri Feb 2 23:12:41 2018 -0800

--
 .../org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java| 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0524fac2/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index d05cfec..f680062 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -1852,8 +1852,17 @@ public class WebHdfsFileSystem extends FileSystem
 try {
   keyProviderUri = getServerDefaults().getKeyProviderUri();
 } catch (UnsupportedOperationException e) {
-  // This means server doesn't supports GETSERVERDEFAULTS call.
+  // This means server doesn't support GETSERVERDEFAULTS call.
   // Do nothing, let keyProviderUri = null.
+} catch (RemoteException e) {
+  if (e.getClassName() != null &&
+  e.getClassName().equals("java.lang.IllegalArgumentException")) {
+// See HDFS-13100.
+// This means server doesn't support GETSERVERDEFAULTS call.
+// Do nothing, let keyProviderUri = null.
+  } else {
+throw e;
+  }
 }
 return HdfsKMSUtil.getKeyProviderUri(ugi, getUri(), keyProviderUri,
 getConf());





hadoop git commit: HDFS-13100. Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs. Contributed by Aaron T. Myers and Yongjun Zhang.

2018-02-02 Thread yjzhangal
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2021f4bdc -> 4e9a59ce1


HDFS-13100. Handle IllegalArgumentException when GETSERVERDEFAULTS is not 
implemented in webhdfs. Contributed by Aaron T. Myers and Yongjun Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4e9a59ce
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4e9a59ce
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4e9a59ce

Branch: refs/heads/trunk
Commit: 4e9a59ce16e81b4bd6fae443a997ef24d588a6e8
Parents: 2021f4b
Author: Yongjun Zhang 
Authored: Fri Feb 2 22:58:44 2018 -0800
Committer: Yongjun Zhang 
Committed: Fri Feb 2 22:58:44 2018 -0800

--
 .../org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java| 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e9a59ce/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index b006495..0db5af7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -1869,8 +1869,17 @@ public class WebHdfsFileSystem extends FileSystem
 try {
   keyProviderUri = getServerDefaults().getKeyProviderUri();
 } catch (UnsupportedOperationException e) {
-  // This means server doesn't supports GETSERVERDEFAULTS call.
+  // This means server doesn't support GETSERVERDEFAULTS call.
   // Do nothing, let keyProviderUri = null.
+} catch (RemoteException e) {
+  if (e.getClassName() != null &&
+  e.getClassName().equals("java.lang.IllegalArgumentException")) {
+// See HDFS-13100.
+// This means server doesn't support GETSERVERDEFAULTS call.
+// Do nothing, let keyProviderUri = null.
+  } else {
+throw e;
+  }
 }
 return HdfsKMSUtil.getKeyProviderUri(ugi, getUri(), keyProviderUri,
 getConf());





hadoop git commit: HDFS-11187. Optimize disk access for last partial chunk checksum of Finalized replica. Contributed by Wei-Chiu Chuang.

2018-02-02 Thread weichiu
Repository: hadoop
Updated Branches:
  refs/heads/trunk c7101fe21 -> 2021f4bdc


HDFS-11187. Optimize disk access for last partial chunk checksum of Finalized 
replica. Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2021f4bd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2021f4bd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2021f4bd

Branch: refs/heads/trunk
Commit: 2021f4bdce3b27c46edaad198f0007a26a8a1391
Parents: c7101fe
Author: Wei-Chiu Chuang 
Authored: Fri Feb 2 17:15:26 2018 -0800
Committer: Wei-Chiu Chuang 
Committed: Fri Feb 2 17:18:42 2018 -0800

--
 .../hdfs/server/datanode/BlockSender.java   | 56 +++
 .../hdfs/server/datanode/FinalizedReplica.java  | 74 
 .../hdfs/server/datanode/ReplicaBuilder.java| 11 ++-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |  1 +
 .../datanode/fsdataset/impl/FsVolumeImpl.java   | 21 --
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 23 ++
 .../namenode/TestListCorruptFileBlocks.java |  4 +-
 7 files changed, 140 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2021f4bd/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
index 3ff5c75..268007f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
@@ -175,8 +175,13 @@ class BlockSender implements java.io.Closeable {
* See {{@link BlockSender#isLongRead()}
*/
   private static final long LONG_READ_THRESHOLD_BYTES = 256 * 1024;
-  
 
+  // The number of bytes per checksum here determines the alignment
+  // of reads: we always start reading at a checksum chunk boundary,
+  // even if the checksum type is NULL. So, choosing too big of a value
+  // would risk sending too much unnecessary data. 512 (1 disk sector)
+  // is likely to result in minimal extra IO.
+  private static final long CHUNK_SIZE = 512;
   /**
* Constructor
* 
@@ -250,18 +255,16 @@ class BlockSender implements java.io.Closeable {
   try(AutoCloseableLock lock = datanode.data.acquireDatasetLock()) {
 replica = getReplica(block, datanode);
 replicaVisibleLength = replica.getVisibleLength();
-if (replica instanceof FinalizedReplica) {
-  // Load last checksum in case the replica is being written
-  // concurrently
-  final FinalizedReplica frep = (FinalizedReplica) replica;
-  chunkChecksum = frep.getLastChecksumAndDataLen();
-}
   }
   if (replica.getState() == ReplicaState.RBW) {
 final ReplicaInPipeline rbw = (ReplicaInPipeline) replica;
 waitForMinLength(rbw, startOffset + length);
 chunkChecksum = rbw.getLastChecksumAndDataLen();
   }
+  if (replica instanceof FinalizedReplica) {
+chunkChecksum = getPartialChunkChecksumForFinalized(
+(FinalizedReplica)replica);
+  }
 
   if (replica.getGenerationStamp() < block.getGenerationStamp()) {
 throw new IOException("Replica gen stamp < block genstamp, block="
@@ -348,12 +351,8 @@ class BlockSender implements java.io.Closeable {
 }
   }
   if (csum == null) {
-// The number of bytes per checksum here determines the alignment
-// of reads: we always start reading at a checksum chunk boundary,
-// even if the checksum type is NULL. So, choosing too big of a value
-// would risk sending too much unnecessary data. 512 (1 disk sector)
-// is likely to result in minimal extra IO.
-csum = DataChecksum.newDataChecksum(DataChecksum.Type.NULL, 512);
+csum = DataChecksum.newDataChecksum(DataChecksum.Type.NULL,
+(int)CHUNK_SIZE);
   }
 
   /*
@@ -427,6 +426,37 @@ class BlockSender implements java.io.Closeable {
 }
   }
 
+  private ChunkChecksum getPartialChunkChecksumForFinalized(
+  FinalizedReplica finalized) throws IOException {
+// There are a number of places in the code base where a finalized replica
+// object is created. If last partial checksum is loaded whenever a
+// finalized replica is created, it would increase latency in DataNode
+// initialization. Therefore, the last partial chunk checksum is loaded
+// lazily.
+
+// Load 
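The lazy-loading rationale in the comment above (pay the disk read on first use, not when every finalized replica object is constructed) can be sketched independently. This is a hypothetical mock: the counter and byte array stand in for the real checksum-file read, and the class is not the actual `FinalizedReplica` code.

```java
public class LazyChecksumSketch {
  // Counts simulated disk reads so the laziness is observable.
  static int diskReads = 0;

  // Stand-in for reading the last partial chunk checksum off disk.
  static byte[] readLastChecksumFromDisk() {
    diskReads++;
    return new byte[]{1, 2, 3, 4};
  }

  private byte[] lastChecksum;  // populated lazily
  private boolean loaded;

  // Only the first caller pays the disk cost; later calls hit the cache.
  synchronized byte[] getLastChecksum() {
    if (!loaded) {
      lastChecksum = readLastChecksumFromDisk();
      loaded = true;
    }
    return lastChecksum;
  }

  public static void main(String[] args) {
    LazyChecksumSketch replica = new LazyChecksumSketch(); // no I/O yet
    System.out.println(diskReads);   // 0: construction is free
    replica.getLastChecksum();
    replica.getLastChecksum();
    System.out.println(diskReads);   // 1: only the first access reads disk
  }
}
```

This is the same trade-off the patch describes: replica objects are created in many places, so doing the read eagerly would slow DataNode initialization for data that is often never needed.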

hadoop git commit: YARN-7879. NM user is unable to access the application filecache due to permissions. Contributed by Jason Lowe.

2018-02-02 Thread szegedim
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0ef639235 -> c7101fe21


YARN-7879. NM user is unable to access the application filecache due to 
permissions. Contributed by Jason Lowe.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c7101fe2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c7101fe2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c7101fe2

Branch: refs/heads/trunk
Commit: c7101fe21ba7b9aa589f0a9266ed34356f30b35f
Parents: 0ef6392
Author: Miklos Szegedi 
Authored: Fri Feb 2 11:38:21 2018 -0800
Committer: Miklos Szegedi 
Committed: Fri Feb 2 16:48:57 2018 -0800

--
 .../src/main/java/org/apache/hadoop/yarn/util/FSDownload.java | 7 ++-
 .../test/java/org/apache/hadoop/yarn/util/TestFSDownload.java | 2 +-
 2 files changed, 3 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7101fe2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
index 508440a..d203f65 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
@@ -401,7 +401,7 @@ public class FSDownload implements Callable {
 }
 
 final Path destinationTmp = new Path(destDirPath + "_tmp");
-createDir(destinationTmp, PRIVATE_DIR_PERMS);
+createDir(destinationTmp, cachePerms);
 Path dFinal =
 files.makeQualified(new Path(destinationTmp, sCopy.getName()));
 try {
@@ -416,10 +416,7 @@ public class FSDownload implements Callable {
   }
 });
   }
-  Path destinationTmpfilesQualified = files.makeQualified(destinationTmp);
-  changePermissions(
-  destinationTmpfilesQualified.getFileSystem(conf),
-  destinationTmpfilesQualified);
+  changePermissions(dFinal.getFileSystem(conf), dFinal);
   files.rename(destinationTmp, destDirPath, Rename.OVERWRITE);
 
   if (LOG.isDebugEnabled()) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7101fe2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java
index fa8c039..08d6189 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java
@@ -451,7 +451,7 @@ public class TestFSDownload {
 FileStatus status = files.getFileStatus(localized.getParent());
 FsPermission perm = status.getPermission();
 assertEquals("Cache directory permissions are incorrect",
-new FsPermission((short)0700), perm);
+new FsPermission((short)0755), perm);
 
 status = files.getFileStatus(localized);
 perm = status.getPermission();
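The shape of the fix — keep the cache directory traversable (0755, per the updated test expectation) and restrict permissions only on the localized artifact itself — can be sketched with plain `java.nio.file` calls. This is a hypothetical illustration assuming a POSIX filesystem, not the YARN `FSDownload` code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

public class FilecachePermsSketch {
  public static void main(String[] args) throws IOException {
    Path cacheDir = Files.createTempDirectory("filecache");
    Path artifact = Files.createFile(cacheDir.resolve("resource.jar"));

    // After the fix: the containing cache dir keeps the 0755 cache
    // permissions so the NM user can traverse it...
    Files.setPosixFilePermissions(cacheDir,
        PosixFilePermissions.fromString("rwxr-xr-x"));
    // ...and only the downloaded file itself is tightened.
    Files.setPosixFilePermissions(artifact,
        PosixFilePermissions.fromString("r-x------"));

    System.out.println(PosixFilePermissions.toString(
        Files.getPosixFilePermissions(cacheDir)));  // rwxr-xr-x
  }
}
```

The bug being fixed was the inverse: the temp directory was created 0700 and the recursive permission walk ran over the directory, locking the NM user out of the application filecache.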





hadoop git commit: HDFS-13073. Cleanup code in InterQJournalProtocol.proto. Contributed by Bharat Viswanadham.

2018-02-02 Thread hanishakoneru
Repository: hadoop
Updated Branches:
  refs/heads/trunk 50723889c -> 0ef639235


HDFS-13073. Cleanup code in InterQJournalProtocol.proto. Contributed by Bharat 
Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0ef63923
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0ef63923
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0ef63923

Branch: refs/heads/trunk
Commit: 0ef639235b305d684cbe46818613320b3fa62d44
Parents: 5072388
Author: Hanisha Koneru 
Authored: Fri Feb 2 16:28:03 2018 -0800
Committer: Hanisha Koneru 
Committed: Fri Feb 2 16:28:03 2018 -0800

--
 .../protocol/InterQJournalProtocol.java |  4 ++--
 ...rQJournalProtocolServerSideTranslatorPB.java | 11 +-
 .../InterQJournalProtocolTranslatorPB.java  | 13 ++--
 .../qjournal/server/JournalNodeRpcServer.java   | 21 ++--
 .../hdfs/qjournal/server/JournalNodeSyncer.java |  8 
 .../src/main/proto/InterQJournalProtocol.proto  | 16 ++-
 6 files changed, 29 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0ef63923/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocol/InterQJournalProtocol.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocol/InterQJournalProtocol.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocol/InterQJournalProtocol.java
index 94caeba..f1f7e9c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocol/InterQJournalProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocol/InterQJournalProtocol.java
@@ -21,7 +21,7 @@ package org.apache.hadoop.hdfs.qjournal.protocol;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.qjournal.server.JournalNode;
-import org.apache.hadoop.hdfs.qjournal.protocol.InterQJournalProtocolProtos.GetEditLogManifestFromJournalResponseProto;
+import org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos.GetEditLogManifestResponseProto;
 import org.apache.hadoop.security.KerberosInfo;
 
 import java.io.IOException;
@@ -47,7 +47,7 @@ public interface InterQJournalProtocol {
*segment
* @return a list of edit log segments since the given transaction ID.
*/
-  GetEditLogManifestFromJournalResponseProto getEditLogManifestFromJournal(
+  GetEditLogManifestResponseProto getEditLogManifestFromJournal(
   String jid, String nameServiceId, long sinceTxId, boolean inProgressOk)
   throws IOException;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0ef63923/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocolPB/InterQJournalProtocolServerSideTranslatorPB.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocolPB/InterQJournalProtocolServerSideTranslatorPB.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocolPB/InterQJournalProtocolServerSideTranslatorPB.java
index 15d6387..d4f97d9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocolPB/InterQJournalProtocolServerSideTranslatorPB.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocolPB/InterQJournalProtocolServerSideTranslatorPB.java
@@ -24,8 +24,8 @@ import com.google.protobuf.ServiceException;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.qjournal.protocol.InterQJournalProtocol;
-import org.apache.hadoop.hdfs.qjournal.protocol.InterQJournalProtocolProtos.GetEditLogManifestFromJournalRequestProto;
-import org.apache.hadoop.hdfs.qjournal.protocol.InterQJournalProtocolProtos.GetEditLogManifestFromJournalResponseProto;
+import org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos.GetEditLogManifestRequestProto;
+import org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos.GetEditLogManifestResponseProto;
 
 import java.io.IOException;
 
@@ -47,10 +47,9 @@ public class InterQJournalProtocolServerSideTranslatorPB implements
   }
 
   @Override
-  public GetEditLogManifestFromJournalResponseProto
-  getEditLogManifestFromJournal(RpcController controller,
-GetEditLogManifestFromJournalRequestProto
-request) throws ServiceException {
+  public 

hadoop git commit: YARN-7778. Merging of placement constraints defined at different levels. Contributed by Weiwei Yang.

2018-02-02 Thread kkaranasos
Repository: hadoop
Updated Branches:
  refs/heads/trunk b6e50fad5 -> 50723889c


YARN-7778. Merging of placement constraints defined at different levels. 
Contributed by Weiwei Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/50723889
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/50723889
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/50723889

Branch: refs/heads/trunk
Commit: 50723889cc29e8dadfa6ab6afbb90ac798d66878
Parents: b6e50fa
Author: Konstantinos Karanasos 
Authored: Fri Feb 2 14:43:54 2018 -0800
Committer: Konstantinos Karanasos 
Committed: Fri Feb 2 14:46:20 2018 -0800

--
 .../MemoryPlacementConstraintManager.java   | 42 ++
 .../constraint/PlacementConstraintManager.java  | 13 
 .../constraint/PlacementConstraintsUtil.java| 24 ++
 .../TestPlacementConstraintManagerService.java  | 82 
 ...stSingleConstraintAppPlacementAllocator.java |  5 ++
 5 files changed, 150 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/50723889/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/MemoryPlacementConstraintManager.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/MemoryPlacementConstraintManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/MemoryPlacementConstraintManager.java
index ceff6f6..5cb8b99 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/MemoryPlacementConstraintManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/MemoryPlacementConstraintManager.java
@@ -24,6 +24,8 @@ import java.util.Collections;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.Set;
+import java.util.List;
+import java.util.ArrayList;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;
@@ -33,6 +35,7 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
+import org.apache.hadoop.yarn.api.resource.PlacementConstraints;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -237,6 +240,45 @@ public class MemoryPlacementConstraintManager
   }
 
   @Override
+  public PlacementConstraint getMultilevelConstraint(ApplicationId appId,
+  Set<String> sourceTags, PlacementConstraint schedulingRequestConstraint) {
+List<PlacementConstraint> constraints = new ArrayList<>();
+// Add scheduling request-level constraint.
+if (schedulingRequestConstraint != null) {
+  constraints.add(schedulingRequestConstraint);
+}
+// Add app-level constraint if appId is given.
+if (appId != null && sourceTags != null
+&& !sourceTags.isEmpty()) {
+  constraints.add(getConstraint(appId, sourceTags));
+}
+// Add global constraint.
+if (sourceTags != null && !sourceTags.isEmpty()) {
+  constraints.add(getGlobalConstraint(sourceTags));
+}
+
+// Remove all null or duplicate constraints.
+List<PlacementConstraint.AbstractConstraint> allConstraints =
+constraints.stream()
+.filter(placementConstraint -> placementConstraint != null
+&& placementConstraint.getConstraintExpr() != null)
+.map(PlacementConstraint::getConstraintExpr)
+.distinct()
+.collect(Collectors.toList());
+
+// Compose an AND constraint
+// When merge request(RC), app(AC) and global constraint(GC),
+// we do a merge on them with CC=AND(GC, AC, RC) and returns a
+// composite AND constraint. Subsequently we check if CC could
+// be satisfied. This ensures that every level of constraint
+// is satisfied.
+PlacementConstraint.And andConstraint = PlacementConstraints.and(
+allConstraints.toArray(new PlacementConstraint
+.AbstractConstraint[allConstraints.size()]));
+return andConstraint.build();
+  }
+
+  @Override
   public void unregisterApplication(ApplicationId appId) {
 try {
   writeLock.lock();
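The merge logic above can be sketched with plain strings standing in for `PlacementConstraint` objects; this is a hypothetical illustration of the technique, not the YARN API. The steps are the same: gather the request-, app- and global-level constraints, drop nulls and duplicates, and treat the survivors as the operands of one composite AND.

```java
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MergeSketch {
  // Merge the three levels; nulls mean "no constraint at this level".
  static List<String> mergeLevels(String request, String app, String global) {
    return Stream.of(request, app, global)
        .filter(Objects::nonNull)
        .distinct()               // the same constraint at two levels counts once
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // request and global levels both ask for rack affinity; app level is unset
    System.out.println("AND" + mergeLevels("rack-affinity", null, "rack-affinity"));
    // → AND[rack-affinity]
  }
}
```

Composing with AND is what guarantees the property stated in the comment: the allocation is accepted only if every level's constraint is satisfied simultaneously.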


hadoop git commit: YARN-7831. YARN Service CLI should use hadoop.http.authentication.type to determine authentication method. Contributed by Eric Yang

2018-02-02 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 51cb6c538 -> b6e50fad5


YARN-7831. YARN Service CLI should use hadoop.http.authentication.type to 
determine authentication method. Contributed by Eric Yang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b6e50fad
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b6e50fad
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b6e50fad

Branch: refs/heads/trunk
Commit: b6e50fad53f26e2b718a85ec0678e3161decc691
Parents: 51cb6c5
Author: Jian He 
Authored: Fri Feb 2 14:37:58 2018 -0800
Committer: Jian He 
Committed: Fri Feb 2 14:38:33 2018 -0800

--
 .../org/apache/hadoop/yarn/service/client/ApiServiceClient.java | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b6e50fad/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
index 8c2edb5..cb91946 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
@@ -123,7 +123,8 @@ public class ApiServiceClient extends AppAdminClient {
   api.append("/");
   api.append(appName);
 }
-if (!UserGroupInformation.isSecurityEnabled()) {
+Configuration conf = getConfig();
+if (conf.get("hadoop.http.authentication.type").equalsIgnoreCase("simple")) {
   api.append("?user.name=" + UrlEncoded
   .encodeString(System.getProperty("user.name")));
 }
@@ -147,7 +148,7 @@ public class ApiServiceClient extends AppAdminClient {
 client.setChunkedEncodingSize(null);
 Builder builder = client
 .resource(getApiUrl(appName)).type(MediaType.APPLICATION_JSON);
-if (conf.get("hadoop.security.authentication").equals("kerberos")) {
+if (conf.get("hadoop.http.authentication.type").equals("kerberos")) {
   AuthenticatedURL.Token token = new AuthenticatedURL.Token();
   builder.header("WWW-Authenticate", token);
 }
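The point of the patch is that the CLI talks to the REST API, so it must branch on the HTTP authentication type rather than the RPC one. A hypothetical sketch of that decision, with a plain map standing in for Hadoop's `Configuration` and a query string standing in for the Jersey client setup:

```java
import java.util.Map;

public class AuthTypeSketch {
  // Build the API URL the way the patched client decides auth:
  // "simple" HTTP auth appends user.name; anything else (e.g. kerberos)
  // is expected to negotiate via SPNEGO headers instead.
  static String apiUrl(Map<String, String> conf, String base, String user) {
    String httpAuth =
        conf.getOrDefault("hadoop.http.authentication.type", "simple");
    if ("simple".equalsIgnoreCase(httpAuth)) {
      return base + "?user.name=" + user;  // pseudo-auth query parameter
    }
    return base;  // kerberos: no user.name, token goes in the headers
  }

  public static void main(String[] args) {
    System.out.println(
        apiUrl(Map.of(), "/app/v1/services/sleeper", "hdfs"));
    // → /app/v1/services/sleeper?user.name=hdfs
  }
}
```

Note the sketch defaults the key when it is unset (`getOrDefault`), which is an assumption on my part; the patched code calls `conf.get(...)` directly and relies on the property being configured.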





hadoop git commit: HADOOP-15168. Add kdiag tool to hadoop command. Contributed by Bharat Viswanadham.

2018-02-02 Thread hanishakoneru
Repository: hadoop
Updated Branches:
  refs/heads/trunk d4e13a464 -> 51cb6c538


HADOOP-15168. Add kdiag tool to hadoop command. Contributed by Bharat 
Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/51cb6c53
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/51cb6c53
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/51cb6c53

Branch: refs/heads/trunk
Commit: 51cb6c5380e8bf2537b4dbda311a9f1b458d60cc
Parents: d4e13a4
Author: Hanisha Koneru 
Authored: Fri Feb 2 12:51:27 2018 -0800
Committer: Hanisha Koneru 
Committed: Fri Feb 2 12:51:27 2018 -0800

--
 .../hadoop-common/src/main/bin/hadoop   |  4 +++
 .../hadoop-common/src/main/bin/hadoop.cmd   |  7 -
 .../src/site/markdown/CommandsManual.md |  6 
 .../src/site/markdown/SecureMode.md | 32 
 4 files changed, 28 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/51cb6c53/hadoop-common-project/hadoop-common/src/main/bin/hadoop
--
diff --git a/hadoop-common-project/hadoop-common/src/main/bin/hadoop 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop
index 1e57185..fa78ec3 100755
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop
@@ -45,6 +45,7 @@ function hadoop_usage
   hadoop_add_subcommand "key" client "manage keys via the KeyProvider"
   hadoop_add_subcommand "trace" client "view and modify Hadoop tracing 
settings"
   hadoop_add_subcommand "version" client "print the version"
+  hadoop_add_subcommand "kdiag" client "Diagnose Kerberos Problems"
   hadoop_generate_usage "${HADOOP_SHELL_EXECNAME}" true
 }
 
@@ -148,6 +149,9 @@ function hadoopcmd_case
 kerbname)
   HADOOP_CLASSNAME=org.apache.hadoop.security.HadoopKerberosName
 ;;
+kdiag)
+  HADOOP_CLASSNAME=org.apache.hadoop.security.KDiag
+;;
 key)
   HADOOP_CLASSNAME=org.apache.hadoop.crypto.key.KeyShell
 ;;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/51cb6c53/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd
--
diff --git a/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd
index a21ebe6..91c65d1 100644
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd
@@ -149,7 +149,7 @@ call :updatepath %HADOOP_BIN_PATH%
 exit /b
   )
 
-  set corecommands=fs version jar checknative conftest distch distcp daemonlog 
archive classpath credential kerbname key trace
+  set corecommands=fs version jar checknative conftest distch distcp daemonlog 
archive classpath credential kerbname key trace kdiag
   for %%i in ( %corecommands% ) do (
 if %hadoop-command% == %%i set corecommand=true  
   )
@@ -231,6 +231,10 @@ call :updatepath %HADOOP_BIN_PATH%
   set CLASS=org.apache.hadoop.security.HadoopKerberosName
   goto :eof
 
+:kdiag
+  set CLASS=org.apache.hadoop.security.KDiag
+  goto :eof
+
 :key
   set CLASS=org.apache.hadoop.crypto.key.KeyShell
   goto :eof
@@ -307,6 +311,7 @@ call :updatepath %HADOOP_BIN_PATH%
   @echo   credential   interact with credential providers
   @echo   jnipath  prints the java.library.path
   @echo   kerbname show auth_to_local principal conversion
+  @echo   kdiagdiagnose kerberos problems
   @echo   key  manage keys via the KeyProvider
   @echo   traceview and modify Hadoop tracing settings
   @echo   daemonlogget/set the log level for each daemon

http://git-wip-us.apache.org/repos/asf/hadoop/blob/51cb6c53/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
index a63a4ba..2839503 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
@@ -187,6 +187,12 @@ user name.
 
 Example: `hadoop kerbname u...@example.com`
 
+### `kdiag`
+
+Usage: `hadoop kdiag`
+
+Diagnose Kerberos Problems
+
 ### `key`
 
 Usage: `hadoop key  [options]`

http://git-wip-us.apache.org/repos/asf/hadoop/blob/51cb6c53/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
--
diff --git 

hadoop git commit: HADOOP-15198. Correct the spelling in CopyFilter.java. Contributed by Mukul Kumar Singh.

2018-02-02 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/trunk f9a4d4cf2 -> d4e13a464


HADOOP-15198. Correct the spelling in CopyFilter.java. Contributed by Mukul 
Kumar Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d4e13a46
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d4e13a46
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d4e13a46

Branch: refs/heads/trunk
Commit: d4e13a4647f6d2d073b628870136f52a5a5ab074
Parents: f9a4d4c
Author: Arpit Agarwal 
Authored: Fri Feb 2 11:37:51 2018 -0800
Committer: Arpit Agarwal 
Committed: Fri Feb 2 11:37:51 2018 -0800

--
 .../src/main/java/org/apache/hadoop/tools/CopyFilter.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d4e13a46/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
index 3da364c..4b348a5 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
@@ -43,7 +43,7 @@ public abstract class CopyFilter {
* Public factory method which returns the appropriate implementation of
* CopyFilter.
*
-   * @param conf DistCp configuratoin
+   * @param conf DistCp configuration
* @return An instance of the appropriate CopyFilter
*/
   public static CopyFilter getCopyFilter(Configuration conf) {


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-7868. Provide improved error message when YARN service is disabled. Contributed by Eric Yang

2018-02-02 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 6e5ba9366 -> f9a4d4cf2


YARN-7868. Provide improved error message when YARN service is disabled. 
Contributed by Eric Yang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f9a4d4cf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f9a4d4cf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f9a4d4cf

Branch: refs/heads/trunk
Commit: f9a4d4cf237d64ccb52ab1792372c89e9d36f41d
Parents: 6e5ba93
Author: Jian He 
Authored: Fri Feb 2 11:10:18 2018 -0800
Committer: Jian He 
Committed: Fri Feb 2 11:10:47 2018 -0800

--
 .../org/apache/hadoop/yarn/service/client/ApiServiceClient.java  | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f9a4d4cf/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
index 34e62b6..8c2edb5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
@@ -171,6 +171,10 @@ public class ApiServiceClient extends AppAdminClient {
   LOG.error("Authentication required");
   return EXIT_EXCEPTION_THROWN;
 }
+if (response.getStatus() == 503) {
+  LOG.error("YARN Service is unavailable or disabled.");
+  return EXIT_EXCEPTION_THROWN;
+}
 try {
   ServiceStatus ss = response.getEntity(ServiceStatus.class);
   output = ss.getDiagnostics();
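
The hunk above short-circuits on HTTP 503 before the client attempts to parse a ServiceStatus entity. A minimal sketch of that status-code guard, assuming -1 for EXIT_EXCEPTION_THROWN (the real constant lives in AppAdminClient and may have a different value):

```java
public class StatusHandler {
    // Assumed exit codes for this sketch; not the actual Hadoop constants.
    static final int EXIT_EXCEPTION_THROWN = -1;
    static final int EXIT_SUCCESS = 0;

    // Check the hard-failure statuses (401 auth required, 503 service
    // disabled/unavailable) before trying to read a response entity,
    // so the user gets a targeted error instead of a parse failure.
    static int exitCodeFor(int httpStatus) {
        if (httpStatus == 401) {
            return EXIT_EXCEPTION_THROWN; // authentication required
        }
        if (httpStatus == 503) {
            return EXIT_EXCEPTION_THROWN; // YARN service unavailable or disabled
        }
        return EXIT_SUCCESS;
    }

    public static void main(String[] args) {
        System.out.println(exitCodeFor(503)); // -1
        System.out.println(exitCodeFor(200)); // 0
    }
}
```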





hadoop git commit: YARN-7839. Modify PlacementAlgorithm to Check node capacity before placing request on node. (Panagiotis Garefalakis via asuresh)

2018-02-02 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/trunk 460d77bd6 -> 6e5ba9366


YARN-7839. Modify PlacementAlgorithm to Check node capacity before placing 
request on node. (Panagiotis Garefalakis via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6e5ba936
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6e5ba936
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6e5ba936

Branch: refs/heads/trunk
Commit: 6e5ba9366fc05719906ff2789b1a0fd26001182b
Parents: 460d77b
Author: Arun Suresh 
Authored: Fri Feb 2 10:28:22 2018 -0800
Committer: Arun Suresh 
Committed: Fri Feb 2 10:28:22 2018 -0800

--
 .../scheduler/capacity/CapacityScheduler.java   |  4 -
 .../algorithm/DefaultPlacementAlgorithm.java| 61 ++
 .../api/ConstraintPlacementAlgorithmOutput.java |  5 +-
 .../SchedulingRequestWithPlacementAttempt.java  | 52 
 .../constraint/processor/BatchedRequests.java   |  2 +-
 .../processor/PlacementDispatcher.java  | 12 +--
 .../processor/PlacementProcessor.java   | 28 +--
 .../constraint/TestPlacementProcessor.java  | 87 +++-
 8 files changed, 215 insertions(+), 36 deletions(-)
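
The core idea of the change is a fits-check before placement: a scheduling request is only assigned to a node whose unallocated capacity covers its resource sizing. A simplified sketch, with plain memory/vcore numbers standing in for YARN's Resource records (the real check also handles extended resource types):

```java
public class CapacityCheck {
    // True if the requested sizing fits in the node's unallocated capacity.
    // This sketch checks only the two classic dimensions.
    static boolean fitsOnNode(long reqMemMb, int reqVcores,
                              long freeMemMb, int freeVcores) {
        return reqMemMb <= freeMemMb && reqVcores <= freeVcores;
    }

    public static void main(String[] args) {
        System.out.println(fitsOnNode(2048, 2, 4096, 4)); // true: place here
        System.out.println(fitsOnNode(8192, 2, 4096, 4)); // false: skip node
    }
}
```

Without this guard the placement algorithm could hand the scheduler a node that satisfies the placement constraints but cannot actually host the container.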
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e5ba936/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index cb01351..d3aa5cb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -2609,10 +2609,6 @@ public class CapacityScheduler extends
 " but only 1 will be attempted !!");
   }
   if (!appAttempt.isStopped()) {
-Resource resource =
-schedulingRequest.getResourceSizing().getResources();
-schedulingRequest.getResourceSizing().setResources(
-getNormalizedResource(resource));
 ResourceCommitRequest
 resourceCommitRequest = createResourceCommitRequest(
 appAttempt, schedulingRequest, schedulerNode);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e5ba936/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java
index 4e6473f..710e6c0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java
@@ -18,10 +18,15 @@
 package 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.algorithm;
 
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.Iterator;
 import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
 
 import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.api.records.ResourceSizing;
 import org.apache.hadoop.yarn.api.records.SchedulingRequest;
 import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
@@ -35,8 

hadoop git commit: HADOOP-15170. Add symlink support to FileUtil#unTarUsingJava. Contributed by Ajay Kumar

2018-02-02 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4aef8bd2e -> 460d77bd6


HADOOP-15170. Add symlink support to FileUtil#unTarUsingJava. Contributed by 
Ajay Kumar


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/460d77bd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/460d77bd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/460d77bd

Branch: refs/heads/trunk
Commit: 460d77bd646d03d3eb7670f9017aeeb5410c4a95
Parents: 4aef8bd
Author: Jason Lowe 
Authored: Fri Feb 2 11:31:39 2018 -0600
Committer: Jason Lowe 
Committed: Fri Feb 2 11:33:26 2018 -0600

--
 .../java/org/apache/hadoop/fs/FileUtil.java | 12 ++-
 .../java/org/apache/hadoop/fs/TestFileUtil.java | 86 
 2 files changed, 97 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/460d77bd/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
index bf9b146..8743be5 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
@@ -34,6 +34,8 @@ import java.net.URI;
 import java.net.UnknownHostException;
 import java.nio.charset.Charset;
 import java.nio.file.AccessDeniedException;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
 import java.util.ArrayList;
 import java.util.Enumeration;
 import java.util.List;
@@ -894,7 +896,7 @@ public class FileUtil {
 }
   }
 
-  private static void unTarUsingJava(File inFile, File untarDir,
+  static void unTarUsingJava(File inFile, File untarDir,
   boolean gzipped) throws IOException {
 InputStream inputStream = null;
 TarArchiveInputStream tis = null;
@@ -956,6 +958,14 @@ public class FileUtil {
   return;
 }
 
+if (entry.isSymbolicLink()) {
+  // Create symbolic link relative to tar parent dir
+  Files.createSymbolicLink(FileSystems.getDefault()
+  .getPath(outputDir.getPath(), entry.getName()),
+  FileSystems.getDefault().getPath(entry.getLinkName()));
+  return;
+}
+
 File outputFile = new File(outputDir, entry.getName());
 if (!outputFile.getParentFile().exists()) {
   if (!outputFile.getParentFile().mkdirs()) {
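
The new branch above maps a symlink tar entry onto Files.createSymbolicLink instead of writing a regular file. A self-contained sketch of that call with illustrative paths (note that on Windows, creating symbolic links may require elevated privileges):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SymlinkSketch {
    // Mirror of the patch: create outputDir/entryName as a symlink whose
    // target is linkName, left relative so it resolves against the link's
    // own directory, just as it was stored in the tar.
    static Path extractSymlinkEntry(Path outputDir, String entryName,
                                    String linkName) throws IOException {
        return Files.createSymbolicLink(
            outputDir.resolve(entryName), Paths.get(linkName));
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("untar");
        Files.createFile(dir.resolve("target.txt")); // pretend this was extracted earlier
        Path link = extractSymlinkEntry(dir, "link.txt", "target.txt");
        System.out.println(Files.isSymbolicLink(link));
    }
}
```

Keeping the link target relative is what makes the extracted tree relocatable, matching the "relative to tar parent dir" comment in the diff.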

http://git-wip-us.apache.org/repos/asf/hadoop/blob/460d77bd/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
index 0ad03fc..39f2f6b 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
@@ -26,6 +26,7 @@ import static org.junit.Assert.assertTrue;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
+import java.io.BufferedInputStream;
 import java.io.File;
 import java.io.FileInputStream;
 import java.io.FileOutputStream;
@@ -37,6 +38,8 @@ import java.net.URI;
 import java.net.URISyntaxException;
 import java.net.URL;
 import java.net.UnknownHostException;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
@@ -47,6 +50,9 @@ import java.util.jar.Manifest;
 import java.util.zip.ZipEntry;
 import java.util.zip.ZipOutputStream;
 
+import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
+import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
+import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.StringUtils;
@@ -1173,4 +1179,84 @@ public class TestFileUtil {
 assertEquals(FileUtil.compareFs(fs3,fs4),true);
 assertEquals(FileUtil.compareFs(fs1,fs6),false);
   }
+
+  @Test(timeout = 8000)
+  public void testCreateSymbolicLinkUsingJava() throws IOException {
+setupDirs();
+final File simpleTar = new File(del, FILE);
+OutputStream os = new FileOutputStream(simpleTar);
+TarArchiveOutputStream tos = new TarArchiveOutputStream(os);
+File untarFile = null;
+try {
+  // Files to tar
+  final String tmpDir = "tmp/test";
+  File tmpDir1 = new 

hadoop git commit: HDFS-13068. RBF: Add router admin option to manage safe mode. Contributed by Yiqun Lin.

2018-02-02 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7b8cc048c -> 7cd3770fe


HDFS-13068. RBF: Add router admin option to manage safe mode. Contributed by 
Yiqun Lin.

(cherry picked from commit b0627c891b0e90e29dab2bec64a01c2c2ffe4ed0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7cd3770f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7cd3770f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7cd3770f

Branch: refs/heads/branch-2
Commit: 7cd3770fee5f0c81bea6b12db7d5de22eb9fae7e
Parents: 7b8cc04
Author: Yiqun Lin 
Authored: Fri Feb 2 11:25:41 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Feb 2 17:38:32 2018 +0800

--
 ...uterAdminProtocolServerSideTranslatorPB.java | 60 
 .../RouterAdminProtocolTranslatorPB.java| 60 +++-
 .../federation/router/RouterAdminServer.java| 46 +++-
 .../server/federation/router/RouterClient.java  |  4 ++
 .../federation/router/RouterStateManager.java   | 50 +
 .../store/protocol/EnterSafeModeRequest.java| 32 +
 .../store/protocol/EnterSafeModeResponse.java   | 50 +
 .../store/protocol/GetSafeModeRequest.java  | 31 
 .../store/protocol/GetSafeModeResponse.java | 49 +
 .../store/protocol/LeaveSafeModeRequest.java| 32 +
 .../store/protocol/LeaveSafeModeResponse.java   | 50 +
 .../impl/pb/EnterSafeModeRequestPBImpl.java | 62 
 .../impl/pb/EnterSafeModeResponsePBImpl.java| 73 +++
 .../impl/pb/GetSafeModeRequestPBImpl.java   | 62 
 .../impl/pb/GetSafeModeResponsePBImpl.java  | 73 +++
 .../impl/pb/LeaveSafeModeRequestPBImpl.java | 62 
 .../impl/pb/LeaveSafeModeResponsePBImpl.java| 73 +++
 .../hdfs/tools/federation/RouterAdmin.java  | 75 +++-
 .../src/main/proto/FederationProtocol.proto | 25 +++
 .../src/main/proto/RouterProtocol.proto | 15 
 .../src/site/markdown/HDFSCommands.md   |  2 +
 .../src/site/markdown/HDFSRouterFederation.md   |  6 +-
 .../federation/router/TestRouterAdminCLI.java   | 48 +
 23 files changed, 1036 insertions(+), 4 deletions(-)
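
The new admin plumbing boils down to three RPC pairs (EnterSafeMode, LeaveSafeMode, GetSafeMode) that toggle and read a single safe-mode flag on the router. A toy model of that contract — the real RouterStateManager works against the router's state store, so this only illustrates the enter/leave/get semantics, not the implementation:

```java
public class SafeModeSketch {
    private boolean safeMode;

    // Each method returns whether the operation took effect, loosely
    // following the boolean status carried by the *Response protos.
    boolean enterSafeMode() { safeMode = true;  return safeMode; }
    boolean leaveSafeMode() { safeMode = false; return !safeMode; }
    boolean getSafeMode()   { return safeMode; }

    public static void main(String[] args) {
        SafeModeSketch router = new SafeModeSketch();
        router.enterSafeMode();
        System.out.println(router.getSafeMode()); // true
        router.leaveSafeMode();
        System.out.println(router.getSafeMode()); // false
    }
}
```

While the flag is set, the router can reject mutating operations until an administrator explicitly leaves safe mode via the new RouterAdmin option.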
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7cd3770f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
index 415bbd9..159d5c2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
@@ -23,8 +23,14 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.AddMountTableEntryRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.AddMountTableEntryResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.UpdateMountTableEntryRequestProto;
@@ -32,16 +38,28 @@ 

hadoop git commit: HDFS-13068. RBF: Add router admin option to manage safe mode. Contributed by Yiqun Lin.

2018-02-02 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 6769c78cd -> 024127e47


HDFS-13068. RBF: Add router admin option to manage safe mode. Contributed by 
Yiqun Lin.

(cherry picked from commit 712e9381d2e3592953a7f9d66540864441842cbf)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/024127e4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/024127e4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/024127e4

Branch: refs/heads/branch-2.9
Commit: 024127e471103956f2cb4558b4dbbd9e74dc04f8
Parents: 6769c78
Author: Yiqun Lin 
Authored: Fri Feb 2 17:34:12 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Feb 2 17:37:14 2018 +0800

--
 ...uterAdminProtocolServerSideTranslatorPB.java | 60 
 .../RouterAdminProtocolTranslatorPB.java| 60 +++-
 .../federation/router/RouterAdminServer.java| 46 +++-
 .../server/federation/router/RouterClient.java  |  4 ++
 .../federation/router/RouterStateManager.java   | 50 +
 .../store/protocol/EnterSafeModeRequest.java| 32 +
 .../store/protocol/EnterSafeModeResponse.java   | 50 +
 .../store/protocol/GetSafeModeRequest.java  | 31 
 .../store/protocol/GetSafeModeResponse.java | 49 +
 .../store/protocol/LeaveSafeModeRequest.java| 32 +
 .../store/protocol/LeaveSafeModeResponse.java   | 50 +
 .../impl/pb/EnterSafeModeRequestPBImpl.java | 62 
 .../impl/pb/EnterSafeModeResponsePBImpl.java| 73 +++
 .../impl/pb/GetSafeModeRequestPBImpl.java   | 62 
 .../impl/pb/GetSafeModeResponsePBImpl.java  | 73 +++
 .../impl/pb/LeaveSafeModeRequestPBImpl.java | 62 
 .../impl/pb/LeaveSafeModeResponsePBImpl.java| 73 +++
 .../hdfs/tools/federation/RouterAdmin.java  | 75 +++-
 .../src/main/proto/FederationProtocol.proto | 25 +++
 .../src/main/proto/RouterProtocol.proto | 15 
 .../src/site/markdown/HDFSCommands.md   |  2 +
 .../src/site/markdown/HDFSRouterFederation.md   |  6 +-
 .../federation/router/TestRouterAdminCLI.java   | 48 +
 23 files changed, 1036 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/024127e4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
index 415bbd9..159d5c2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
@@ -23,8 +23,14 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.AddMountTableEntryRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.AddMountTableEntryResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.UpdateMountTableEntryRequestProto;
@@ -32,16 +38,28 @@ 

hadoop git commit: HDFS-13068. RBF: Add router admin option to manage safe mode. Contributed by Yiqun Lin.

2018-02-02 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 01661d4f9 -> 712e9381d


HDFS-13068. RBF: Add router admin option to manage safe mode. Contributed by 
Yiqun Lin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/712e9381
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/712e9381
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/712e9381

Branch: refs/heads/branch-3.0
Commit: 712e9381d2e3592953a7f9d66540864441842cbf
Parents: 01661d4
Author: Yiqun Lin 
Authored: Fri Feb 2 17:34:12 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Feb 2 17:34:12 2018 +0800

--
 ...uterAdminProtocolServerSideTranslatorPB.java | 60 
 .../RouterAdminProtocolTranslatorPB.java| 60 +++-
 .../federation/router/RouterAdminServer.java| 46 +++-
 .../server/federation/router/RouterClient.java  |  4 ++
 .../federation/router/RouterStateManager.java   | 50 +
 .../store/protocol/EnterSafeModeRequest.java| 32 +
 .../store/protocol/EnterSafeModeResponse.java   | 50 +
 .../store/protocol/GetSafeModeRequest.java  | 31 
 .../store/protocol/GetSafeModeResponse.java | 49 +
 .../store/protocol/LeaveSafeModeRequest.java| 32 +
 .../store/protocol/LeaveSafeModeResponse.java   | 50 +
 .../impl/pb/EnterSafeModeRequestPBImpl.java | 62 
 .../impl/pb/EnterSafeModeResponsePBImpl.java| 73 +++
 .../impl/pb/GetSafeModeRequestPBImpl.java   | 62 
 .../impl/pb/GetSafeModeResponsePBImpl.java  | 73 +++
 .../impl/pb/LeaveSafeModeRequestPBImpl.java | 62 
 .../impl/pb/LeaveSafeModeResponsePBImpl.java| 73 +++
 .../hdfs/tools/federation/RouterAdmin.java  | 75 +++-
 .../src/main/proto/FederationProtocol.proto | 25 +++
 .../src/main/proto/RouterProtocol.proto | 15 
 .../src/site/markdown/HDFSCommands.md   |  2 +
 .../src/site/markdown/HDFSRouterFederation.md   |  6 +-
 .../federation/router/TestRouterAdminCLI.java   | 48 +
 23 files changed, 1036 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/712e9381/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
index 415bbd9..159d5c2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
@@ -23,8 +23,14 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.AddMountTableEntryRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.AddMountTableEntryResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.UpdateMountTableEntryRequestProto;
@@ -32,16 +38,28 @@ import