hadoop git commit: HDDS-392. Incomplete description about auditMap#key in AuditLogging Framework. Contributed by Dinesh Chitlangia.

2018-08-31 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/trunk 76bae4ccb -> 19abaacda


HDDS-392. Incomplete description about auditMap#key in AuditLogging Framework.
Contributed by Dinesh Chitlangia.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/19abaacd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/19abaacd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/19abaacd

Branch: refs/heads/trunk
Commit: 19abaacdad84b03fc790341b4b5bcf1c4d41f1fb
Parents: 76bae4c
Author: Anu Engineer 
Authored: Fri Aug 31 22:24:30 2018 -0700
Committer: Anu Engineer 
Committed: Fri Aug 31 22:24:30 2018 -0700

--
 .../main/java/org/apache/hadoop/ozone/audit/package-info.java  | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/19abaacd/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/package-info.java
--
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/package-info.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/package-info.java
index 48de3f7..9c00ef7 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/package-info.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/package-info.java
@@ -50,8 +50,10 @@ package org.apache.hadoop.ozone.audit;
  * The implementing class must override toAuditMap() to return an
  * instance of Map where both Key and Value are String.
  *
- * Key: must not contain any spaces. If the key is multi word then use
- * camel case.
+ * Key: must contain printable US ASCII characters
+ * May not contain a space, =, ], or "
+ * If the key is multi word then use camel case.
+ *
  * Value: if it is a collection/array, then it must be converted to a comma
  * delimited string
  *
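
To make the key/value rules concrete, here is a minimal, illustrative sketch of a
conforming toAuditMap() implementation (the class and fields below are invented
for illustration; they are not part of the commit):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: an entity exposing its state for audit logging.
public class VolumeArgsExample {
  private String volumeName;
  private String ownerName;
  private List<String> aclList;

  public Map<String, String> toAuditMap() {
    Map<String, String> auditMap = new HashMap<>();
    // Keys are camel case, printable US ASCII, with no space, '=', ']' or '"'.
    auditMap.put("volumeName", volumeName);
    auditMap.put("ownerName", ownerName);
    // Collection values must be flattened to a comma-delimited string.
    auditMap.put("aclList", String.join(",", aclList));
    return auditMap;
  }
}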


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[2/2] hadoop git commit: HDDS-379. Simplify and improve the cli arg parsing of ozone scmcli. Contributed by Elek, Marton.

2018-08-31 Thread aengineer
HDDS-379. Simplify and improve the cli arg parsing of ozone scmcli.
Contributed by Elek, Marton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/76bae4cc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/76bae4cc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/76bae4cc

Branch: refs/heads/trunk
Commit: 76bae4ccb1d929260038b1869be8070c2320b617
Parents: 50d2e3e
Author: Anu Engineer 
Authored: Fri Aug 31 18:11:01 2018 -0700
Committer: Anu Engineer 
Committed: Fri Aug 31 18:11:01 2018 -0700

--
 .../common/dev-support/findbugsExcludeFile.xml  |   4 +
 .../org/apache/hadoop/hdds/cli/GenericCli.java  |  82 +++
 .../hadoop/hdds/cli/HddsVersionProvider.java|  35 ++
 .../apache/hadoop/hdds/cli/package-info.java|  22 +
 hadoop-hdds/pom.xml |   5 +
 .../hadoop/hdds/scm/cli/OzoneBaseCLI.java   |  43 --
 .../hdds/scm/cli/OzoneCommandHandler.java   |  87 
 .../apache/hadoop/hdds/scm/cli/ResultCode.java  |  31 --
 .../org/apache/hadoop/hdds/scm/cli/SCMCLI.java  | 246 +++--
 .../cli/container/CloseContainerHandler.java|  85 ---
 .../hdds/scm/cli/container/CloseSubcommand.java |  54 ++
 .../cli/container/ContainerCommandHandler.java  | 128 -
 .../cli/container/CreateContainerHandler.java   |  67 ---
 .../scm/cli/container/CreateSubcommand.java |  65 +++
 .../cli/container/DeleteContainerHandler.java   |  95 
 .../scm/cli/container/DeleteSubcommand.java |  60 +++
 .../scm/cli/container/InfoContainerHandler.java | 114 
 .../hdds/scm/cli/container/InfoSubcommand.java  |  94 
 .../scm/cli/container/ListContainerHandler.java | 117 -
 .../hdds/scm/cli/container/ListSubcommand.java  |  83 +++
 .../hdds/scm/cli/container/package-info.java|   3 +
 .../hadoop/hdds/scm/cli/package-info.java   |  12 +-
 hadoop-ozone/common/src/main/bin/ozone  |   2 +-
 .../org/apache/hadoop/ozone/scm/TestSCMCli.java | 518 ---
 24 files changed, 596 insertions(+), 1456 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/76bae4cc/hadoop-hdds/common/dev-support/findbugsExcludeFile.xml
--
diff --git a/hadoop-hdds/common/dev-support/findbugsExcludeFile.xml b/hadoop-hdds/common/dev-support/findbugsExcludeFile.xml
index daf6fec..c7db679 100644
--- a/hadoop-hdds/common/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-hdds/common/dev-support/findbugsExcludeFile.xml
@@ -21,4 +21,8 @@
   
 
   
+  
+
+
+  
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/76bae4cc/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/cli/GenericCli.java
--
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/cli/GenericCli.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/cli/GenericCli.java
new file mode 100644
index 0000000..2b3e6c0
--- /dev/null
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/cli/GenericCli.java
@@ -0,0 +1,82 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hdds.cli;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+
+import picocli.CommandLine;
+import picocli.CommandLine.ExecutionException;
+import picocli.CommandLine.Option;
+import picocli.CommandLine.ParameterException;
+import picocli.CommandLine.RunLast;
+
+/**
+ * This is a generic parent class for all the ozone related cli tools.
+ */
+public class GenericCli implements Callable<Void> {
+
+  @Option(names = {"--verbose"},
+  description = "More verbose output. Show the stack trace of the errors.")
+  private boolean verbose;
+
+  @Option(names = {"-D", "--set"})
+  private Map<String, String> configurationOverrides = new HashMap<>();
+
+  private final CommandLine cmd;
+
+  public GenericCli() {
+cmd = new CommandLine(this);
+  }
+
+  public void 
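
The archive cuts the message off here. For orientation, a rough, hypothetical
sketch of the picocli pattern GenericCli builds on, assuming picocli 4.x's
execute() entry point rather than the 3.x RunLast handler imported above; all
names below are illustrative:

import java.util.concurrent.Callable;

import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Illustrative only: a minimal picocli command in the style of GenericCli.
@Command(name = "demo", description = "Example ozone-style CLI tool")
public class DemoCli implements Callable<Void> {

  @Option(names = {"--verbose"},
      description = "More verbose output.")
  private boolean verbose;

  @Override
  public Void call() {
    System.out.println("verbose=" + verbose);
    return null;
  }

  public static void main(String[] args) {
    // execute() parses the arguments, invokes call(), and maps
    // parameter/execution errors to a process exit code.
    System.exit(new CommandLine(new DemoCli()).execute(args));
  }
}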

[1/2] hadoop git commit: HDDS-379. Simplify and improve the cli arg parsing of ozone scmcli. Contributed by Elek, Marton.

2018-08-31 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/trunk 50d2e3ec4 -> 76bae4ccb


http://git-wip-us.apache.org/repos/asf/hadoop/blob/76bae4cc/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
--
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
deleted file mode 100644
index 722c1a5..0000000
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
+++ /dev/null
@@ -1,518 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.ozone.scm;
-
-import com.google.common.primitives.Longs;
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
-import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
-import org.apache.hadoop.io.IOUtils;
-import org.apache.hadoop.ozone.MiniOzoneCluster;
-import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.ozone.container.ContainerTestHelper;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
-import org.apache.hadoop.hdds.scm.cli.ResultCode;
-import org.apache.hadoop.hdds.scm.cli.SCMCLI;
-import org.apache.hadoop.hdds.scm.XceiverClientManager;
-import org.apache.hadoop.hdds.scm.client.ContainerOperationClient;
-import org.apache.hadoop.hdds.scm.client.ScmClient;
-import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
-import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
-import org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
-
-import org.apache.hadoop.ozone.container.common.impl.ContainerData;
-import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
-import org.apache.hadoop.ozone.container.keyvalue.helpers.KeyUtils;
-import org.junit.AfterClass;
-import org.junit.Assert;
-import org.junit.BeforeClass;
-import org.junit.Ignore;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.Timeout;
-
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.io.PrintStream;
-import java.util.ArrayList;
-import java.util.List;
-
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState.CLOSED;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState.OPEN;
-
-import static org.apache.hadoop.hdds.scm.cli.ResultCode.EXECUTION_ERROR;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.assertFalse;
-
-/**
- * This class tests the CLI of SCM.
- */
-@Ignore ("Needs to be fixed for new SCM and Storage design")
-public class TestSCMCli {
-  private static SCMCLI cli;
-
-  private static MiniOzoneCluster cluster;
-  private static OzoneConfiguration conf;
-  private static StorageContainerLocationProtocolClientSideTranslatorPB
-  storageContainerLocationClient;
-
-  private static StorageContainerManager scm;
-  private static ScmClient containerOperationClient;
-
-  private static ByteArrayOutputStream outContent;
-  private static PrintStream outStream;
-  private static ByteArrayOutputStream errContent;
-  private static PrintStream errStream;
-  private static XceiverClientManager xceiverClientManager;
-  private static String containerOwner = "OZONE";
-
-  @Rule
-  public Timeout globalTimeout = new Timeout(300000);
-
-  @BeforeClass
-  public static void setup() throws Exception {
-conf = new OzoneConfiguration();
-cluster = MiniOzoneCluster.newBuilder(conf).setNumDatanodes(3).build();
-cluster.waitForClusterToBeReady();
-xceiverClientManager = new XceiverClientManager(conf);
-storageContainerLocationClient =
-cluster.getStorageContainerLocationClient();
-containerOperationClient = new ContainerOperationClient(
-storageContainerLocationClient, new XceiverClientManager(conf));
-outContent = new ByteArrayOutputStream();
-outStream = new 

hadoop git commit: HDDS-388. Fix the name of the db profile configuration key. Contributed by Elek, Marton.

2018-08-31 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/trunk 630b64ec7 -> 50d2e3ec4


HDDS-388. Fix the name of the db profile configuration key.
Contributed by Elek, Marton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/50d2e3ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/50d2e3ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/50d2e3ec

Branch: refs/heads/trunk
Commit: 50d2e3ec41c73f9a0198d4a4e3d6f308d3030b8a
Parents: 630b64e
Author: Anu Engineer 
Authored: Fri Aug 31 14:30:29 2018 -0700
Committer: Anu Engineer 
Committed: Fri Aug 31 14:30:29 2018 -0700

--
 hadoop-hdds/common/src/main/resources/ozone-default.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/50d2e3ec/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 6d2ee09..d3ec4a5 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -1100,7 +1100,7 @@
   </property>
 
   <property>
-    <name>ozone.db.profile</name>
+    <name>hdds.db.profile</name>
     <value>DBProfile.SSD</value>
     <tag>OZONE, OM, PERFORMANCE, REQUIRED</tag>
     <description>This property allows user to pick a configuration


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDDS-98. Adding Ozone Manager Audit Log. Contributed by Dinesh Chitlangia.

2018-08-31 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8aa6c4f07 -> 630b64ec7


HDDS-98. Adding Ozone Manager Audit Log.
Contributed by Dinesh Chitlangia.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/630b64ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/630b64ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/630b64ec

Branch: refs/heads/trunk
Commit: 630b64ec7e963968a5bdcd1d625fc78746950137
Parents: 8aa6c4f
Author: Anu Engineer 
Authored: Fri Aug 31 14:20:56 2018 -0700
Committer: Anu Engineer 
Committed: Fri Aug 31 14:20:56 2018 -0700

--
 .../src/main/compose/ozone/docker-config|  37 
 .../org/apache/hadoop/ozone/OzoneConsts.java|  32 +++
 hadoop-ozone/common/src/main/bin/ozone  |   2 +
 .../src/main/conf/om-audit-log4j2.properties|  86 
 .../org/apache/hadoop/ozone/audit/OMAction.java |  25 ++-
 .../hadoop/ozone/om/helpers/OmBucketArgs.java   |  25 ++-
 .../hadoop/ozone/om/helpers/OmBucketInfo.java   |  21 +-
 .../hadoop/ozone/om/helpers/OmKeyArgs.java  |  22 +-
 .../hadoop/ozone/om/helpers/OmVolumeArgs.java   |  16 +-
 .../apache/hadoop/ozone/om/OzoneManager.java| 218 ++-
 10 files changed, 466 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/630b64ec/hadoop-dist/src/main/compose/ozone/docker-config
--
diff --git a/hadoop-dist/src/main/compose/ozone/docker-config b/hadoop-dist/src/main/compose/ozone/docker-config
index a1828a3..21127f8 100644
--- a/hadoop-dist/src/main/compose/ozone/docker-config
+++ b/hadoop-dist/src/main/compose/ozone/docker-config
@@ -31,3 +31,40 @@ LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
 LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
 #Enable this variable to print out all hadoop rpc traffic to the stdout. See http://byteman.jboss.org/ to define your own instrumentation.
 #BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm
+
+#LOG4J2.PROPERTIES_* are for Ozone Audit Logging
+LOG4J2.PROPERTIES_monitorInterval=30
+LOG4J2.PROPERTIES_filter=read,write
+LOG4J2.PROPERTIES_filter.read.type=MarkerFilter
+LOG4J2.PROPERTIES_filter.read.marker=READ
+LOG4J2.PROPERTIES_filter.read.onMatch=DENY
+LOG4J2.PROPERTIES_filter.read.onMismatch=NEUTRAL
+LOG4J2.PROPERTIES_filter.write.type=MarkerFilter
+LOG4J2.PROPERTIES_filter.write.marker=WRITE
+LOG4J2.PROPERTIES_filter.write.onMatch=NEUTRAL
+LOG4J2.PROPERTIES_filter.write.onMismatch=NEUTRAL
+LOG4J2.PROPERTIES_appenders=console, rolling
+LOG4J2.PROPERTIES_appender.console.type=Console
+LOG4J2.PROPERTIES_appender.console.name=STDOUT
+LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout
+LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n
+LOG4J2.PROPERTIES_appender.rolling.type=RollingFile
+LOG4J2.PROPERTIES_appender.rolling.name=RollingFile
+LOG4J2.PROPERTIES_appender.rolling.fileName=${sys:hadoop.log.dir}/om-audit-${hostName}.log
+LOG4J2.PROPERTIES_appender.rolling.filePattern=${sys:hadoop.log.dir}/om-audit-${hostName}-%d{yyyy-MM-dd-HH-mm-ss}-%i.log.gz
+LOG4J2.PROPERTIES_appender.rolling.layout.type=PatternLayout
+LOG4J2.PROPERTIES_appender.rolling.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n
+LOG4J2.PROPERTIES_appender.rolling.policies.type=Policies
+LOG4J2.PROPERTIES_appender.rolling.policies.time.type=TimeBasedTriggeringPolicy
+LOG4J2.PROPERTIES_appender.rolling.policies.time.interval=86400
+LOG4J2.PROPERTIES_appender.rolling.policies.size.type=SizeBasedTriggeringPolicy
+LOG4J2.PROPERTIES_appender.rolling.policies.size.size=64MB
+LOG4J2.PROPERTIES_loggers=audit
+LOG4J2.PROPERTIES_logger.audit.type=AsyncLogger
+LOG4J2.PROPERTIES_logger.audit.name=OMAudit
+LOG4J2.PROPERTIES_logger.audit.level=INFO
+LOG4J2.PROPERTIES_logger.audit.appenderRefs=rolling
+LOG4J2.PROPERTIES_logger.audit.appenderRef.file.ref=RollingFile
+LOG4J2.PROPERTIES_rootLogger.level=INFO
+LOG4J2.PROPERTIES_rootLogger.appenderRefs=stdout
+LOG4J2.PROPERTIES_rootLogger.appenderRef.stdout.ref=STDOUT
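
The two MarkerFilters above drop READ-marked audit events (onMatch=DENY) and let
WRITE-marked events through to the async "OMAudit" logger. As a hedged
illustration using the plain Log4j 2 API (the real Ozone AuditLogger wraps this;
the messages below are made up):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.Marker;
import org.apache.logging.log4j.MarkerManager;

// Illustrative only: audit events carry a marker that the configured
// MarkerFilters match against.
public class AuditMarkerExample {
  private static final Logger LOG = LogManager.getLogger("OMAudit");
  private static final Marker READ = MarkerManager.getMarker("READ");
  private static final Marker WRITE = MarkerManager.getMarker("WRITE");

  public static void main(String[] args) {
    // With the configuration above, this event is denied by filter.read.
    LOG.info(READ, "user=hadoop | op=READ_VOLUME | vol=vol1");
    // This event is kept and reaches the rolling audit file.
    LOG.info(WRITE, "user=hadoop | op=CREATE_VOLUME | vol=vol1");
  }
}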

http://git-wip-us.apache.org/repos/asf/hadoop/blob/630b64ec/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
--
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
index 15366fb..9645c02 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java

[46/47] hadoop git commit: Merge branch 'trunk' into HDFS-12943

2018-08-31 Thread xkrogen
Merge branch 'trunk' into HDFS-12943


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/53201734
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/53201734
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/53201734

Branch: refs/heads/HDFS-12943
Commit: 53201734f5d888b892118a3f8d873ac01c209de4
Parents: 191faeb 8aa6c4f
Author: Erik Krogen 
Authored: Fri Aug 31 09:06:54 2018 -0700
Committer: Erik Krogen 
Committed: Fri Aug 31 09:06:54 2018 -0700

--
 dev-support/bin/ozone-dist-layout-stitching |6 +-
 dev-support/bin/ozone-dist-tar-stitching|9 +-
 .../hadoop/fs/FileSystemMultipartUploader.java  |6 +-
 .../org/apache/hadoop/fs/MultipartUploader.java |   11 +
 .../apache/hadoop/fs/TrashPolicyDefault.java|   14 +
 .../src/site/markdown/Compatibility.md  |2 +-
 .../site/markdown/InterfaceClassification.md|2 +-
 .../java/org/apache/hadoop/fs/TestTrash.java|   54 +
 .../AbstractContractMultipartUploaderTest.java  |   43 +
 .../crypto/key/kms/server/KMSConfiguration.java |   31 +
 .../hadoop/crypto/key/kms/server/KMSWebApp.java |   38 +-
 .../crypto/key/kms/server/KMSWebServer.java |1 +
 .../apache/hadoop/hdds/scm/XceiverClient.java   |6 +-
 .../hadoop/hdds/scm/XceiverClientGrpc.java  |8 +-
 .../hadoop/hdds/scm/XceiverClientManager.java   |8 +-
 .../scm/client/ContainerOperationClient.java|9 +
 .../hdds/scm/storage/ChunkInputStream.java  |7 +-
 .../hdds/scm/storage/ChunkOutputStream.java |   44 +-
 .../org/apache/hadoop/hdds/HddsConfigKeys.java  |6 +
 .../org/apache/hadoop/hdds/client/BlockID.java  |5 +-
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   |4 -
 .../hadoop/hdds/scm/XceiverClientSpi.java   |2 -
 .../hadoop/hdds/scm/client/ScmClient.java   |9 +-
 .../common/helpers/AllocatedBlock.java  |4 +-
 .../container/common/helpers/ContainerInfo.java |   12 +-
 .../common/helpers/ContainerWithPipeline.java   |7 +-
 .../scm/container/common/helpers/Pipeline.java  |   11 +-
 .../StorageContainerLocationProtocol.java   |6 +-
 ...rLocationProtocolClientSideTranslatorPB.java |   21 +-
 .../scm/storage/ContainerProtocolCalls.java |6 +-
 .../apache/hadoop/ozone/OzoneConfigKeys.java|7 -
 .../org/apache/hadoop/ozone/OzoneConsts.java|8 +-
 .../apache/hadoop/ozone/audit/AuditLogger.java  |   66 +-
 .../apache/hadoop/ozone/audit/AuditMessage.java |   64 +
 .../apache/hadoop/ozone/audit/package-info.java |   19 +-
 .../ozone/container/common/helpers/KeyData.java |8 +-
 .../apache/hadoop/utils/HddsVersionInfo.java|6 +-
 .../hadoop/utils/db/DBConfigFromFile.java   |  134 +++
 .../org/apache/hadoop/utils/db/DBProfile.java   |  120 ++
 .../apache/hadoop/utils/db/DBStoreBuilder.java  |  201 
 .../org/apache/hadoop/utils/db/RDBStore.java|   32 +-
 .../org/apache/hadoop/utils/db/TableConfig.java |   93 ++
 .../common/src/main/resources/ozone-default.xml |   40 +-
 .../ozone/audit/TestOzoneAuditLogger.java   |  124 +-
 .../apache/hadoop/utils/TestMetadataStore.java  |1 -
 .../hadoop/utils/db/TestDBConfigFromFile.java   |  116 ++
 .../hadoop/utils/db/TestDBStoreBuilder.java |  174 +++
 .../apache/hadoop/utils/db/TestRDBStore.java|   17 +-
 .../hadoop/utils/db/TestRDBTableStore.java  |   11 +-
 .../common/src/test/resources/test.db.ini   |  145 +++
 .../hadoop/ozone/HddsDatanodeService.java   |3 +-
 .../common/helpers/ContainerUtils.java  |   22 +-
 .../container/common/impl/ContainerData.java|   24 +-
 .../common/impl/ContainerDataYaml.java  |5 +-
 .../container/common/impl/ContainerSet.java |2 +-
 .../container/common/impl/HddsDispatcher.java   |6 +-
 .../common/impl/OpenContainerBlockMap.java  |   19 +-
 .../transport/server/GrpcXceiverService.java|8 +-
 .../transport/server/XceiverServerGrpc.java |2 +-
 .../transport/server/ratis/CSMMetrics.java  |  115 ++
 .../server/ratis/ContainerStateMachine.java |   33 +
 .../server/ratis/XceiverServerRatis.java|6 +-
 .../container/keyvalue/KeyValueContainer.java   |2 +-
 .../keyvalue/KeyValueContainerData.java |   10 +-
 .../container/keyvalue/KeyValueHandler.java |   15 +-
 .../keyvalue/interfaces/KeyManager.java |4 +-
 .../container/ozoneimpl/OzoneContainer.java |   11 +-
 .../ozone/protocol/commands/CommandStatus.java  |   16 +-
 .../ozone/container/common/ScmTestMock.java |6 +-
 .../common/TestKeyValueContainerData.java   |5 +-
 .../common/impl/TestContainerDataYaml.java  |7 +-
 .../container/common/impl/TestContainerSet.java |7 +-
 .../common/impl/TestHddsDispatcher.java |3 +-
 .../common/interfaces/TestHandler.java  |7 -
 

[42/47] hadoop git commit: HADOOP-15107. Stabilize/tune S3A committers; review correctness & docs. Contributed by Steve Loughran.

2018-08-31 Thread xkrogen
HADOOP-15107. Stabilize/tune S3A committers; review correctness & docs.
Contributed by Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5a0babf7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5a0babf7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5a0babf7

Branch: refs/heads/HDFS-12943
Commit: 5a0babf76550f63dad4c17173c4da2bf335c6532
Parents: e8d138c
Author: Steve Loughran 
Authored: Thu Aug 30 14:49:53 2018 +0100
Committer: Steve Loughran 
Committed: Thu Aug 30 14:49:53 2018 +0100

--
 .../lib/output/PathOutputCommitter.java |  12 +-
 .../java/org/apache/hadoop/fs/s3a/Invoker.java  |  15 +-
 .../fs/s3a/commit/AbstractS3ACommitter.java |  16 +-
 .../fs/s3a/commit/S3ACommitterFactory.java  |  18 +-
 .../s3a/commit/magic/MagicS3GuardCommitter.java |   7 +
 .../staging/DirectoryStagingCommitter.java  |   8 +-
 .../staging/PartitionedStagingCommitter.java|   9 +-
 .../hadoop/fs/s3a/commit/staging/Paths.java |  14 +-
 .../fs/s3a/commit/staging/StagingCommitter.java |  50 -
 .../tools/hadoop-aws/committer_architecture.md  |  94 ++---
 .../markdown/tools/hadoop-aws/committers.md |   2 +-
 .../fs/s3a/commit/AbstractCommitITest.java  |  19 ++
 .../fs/s3a/commit/AbstractITCommitMRJob.java|   5 +-
 .../fs/s3a/commit/AbstractITCommitProtocol.java |  63 --
 .../fs/s3a/commit/ITestS3ACommitterFactory.java | 200 +++
 .../fs/s3a/commit/magic/ITMagicCommitMRJob.java |   6 +-
 .../commit/magic/ITestMagicCommitProtocol.java  |  25 ++-
 .../ITStagingCommitMRJobBadDest.java|  62 ++
 .../integration/ITestStagingCommitProtocol.java |  13 ++
 19 files changed, 542 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a0babf7/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/PathOutputCommitter.java
--
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/PathOutputCommitter.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/PathOutputCommitter.java
index 3679d9f..5e25f50 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/PathOutputCommitter.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/PathOutputCommitter.java
@@ -57,8 +57,8 @@ public abstract class PathOutputCommitter extends OutputCommitter {
   protected PathOutputCommitter(Path outputPath,
   TaskAttemptContext context) throws IOException {
 this.context = Preconditions.checkNotNull(context, "Null context");
-LOG.debug("Creating committer with output path {} and task context"
-+ " {}", outputPath, context);
+LOG.debug("Instantiating committer {} with output path {} and task context"
++ " {}", this, outputPath, context);
   }
 
   /**
@@ -71,8 +71,8 @@ public abstract class PathOutputCommitter extends OutputCommitter {
   protected PathOutputCommitter(Path outputPath,
   JobContext context) throws IOException {
 this.context = Preconditions.checkNotNull(context, "Null context");
-LOG.debug("Creating committer with output path {} and job context"
-+ " {}", outputPath, context);
+LOG.debug("Instantiating committer {} with output path {} and job context"
++ " {}", this, outputPath, context);
   }
 
   /**
@@ -103,6 +103,8 @@ public abstract class PathOutputCommitter extends OutputCommitter {
 
   @Override
   public String toString() {
-return "PathOutputCommitter{context=" + context + '}';
+return "PathOutputCommitter{context=" + context
++ "; " + super.toString()
++ '}';
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a0babf7/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java
--
diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java
index a007ba1..45912a0 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java
@@ -130,8 +130,9 @@ public class Invoker {
   }
 
   /**
-   * Execute an operation and ignore all raised IOExceptions; log at INFO.
-   * @param log log to log at info.
+   * 

[35/47] hadoop git commit: YARN-8642. Add support for tmpfs mounts with the Docker runtime. Contributed by Craig Condit

2018-08-31 Thread xkrogen
YARN-8642. Add support for tmpfs mounts with the Docker runtime. Contributed by 
Craig Condit


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/73625168
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/73625168
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/73625168

Branch: refs/heads/HDFS-12943
Commit: 73625168c0f29aa646d7a715c9fb15e43d6c7e05
Parents: a0ebb6b
Author: Shane Kumpf 
Authored: Wed Aug 29 07:08:37 2018 -0600
Committer: Shane Kumpf 
Committed: Wed Aug 29 07:08:37 2018 -0600

--
 .../hadoop/yarn/conf/YarnConfiguration.java |   5 +
 .../src/main/resources/yarn-default.xml |   7 +
 .../runtime/DockerLinuxContainerRuntime.java|  38 +
 .../linux/runtime/docker/DockerRunCommand.java  |   5 +
 .../container-executor/impl/utils/docker-util.c |  42 ++
 .../container-executor/impl/utils/docker-util.h |   3 +-
 .../test/utils/test_docker_util.cc  |  64 
 .../runtime/TestDockerContainerRuntime.java | 149 +++
 .../runtime/docker/TestDockerRunCommand.java|   5 +-
 .../src/site/markdown/DockerContainers.md   |   1 +
 10 files changed, 317 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/73625168/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 148edb9..d525e4d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2012,6 +2012,11 @@ public class YarnConfiguration extends Configuration {
   public static final String NM_DOCKER_DEFAULT_RW_MOUNTS =
   DOCKER_CONTAINER_RUNTIME_PREFIX + "default-rw-mounts";
 
+  /** The default list of tmpfs mounts to be mounted into all
+   *  Docker containers that use DockerContainerRuntime. */
+  public static final String NM_DOCKER_DEFAULT_TMPFS_MOUNTS =
+  DOCKER_CONTAINER_RUNTIME_PREFIX + "default-tmpfs-mounts";
+
   /** The mode in which the Java Container Sandbox should run detailed by
*  the JavaSandboxLinuxContainerRuntime. */
   public static final String YARN_CONTAINER_SANDBOX =
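
As a small, hypothetical usage sketch, a runtime could read the new key with the
existing Configuration.getTrimmedStrings() API (the class below is invented for
illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Illustrative only: fetch the admin-configured default tmpfs mounts.
public class TmpfsMountsExample {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // getTrimmedStrings returns an empty array when the property is unset.
    String[] tmpfsMounts =
        conf.getTrimmedStrings(YarnConfiguration.NM_DOCKER_DEFAULT_TMPFS_MOUNTS);
    for (String mount : tmpfsMounts) {
      System.out.println("default tmpfs mount: " + mount);
    }
  }
}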

http://git-wip-us.apache.org/repos/asf/hadoop/blob/73625168/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 72e42d8..4262436 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -1828,6 +1828,13 @@
   </property>
 
   <property>
+    <description>The default list of tmpfs mounts to be mounted into all Docker
+      containers that use DockerContainerRuntime.</description>
+    <name>yarn.nodemanager.runtime.linux.docker.default-tmpfs-mounts</name>
+    <value></value>
+  </property>
+
+  <property>
     <description>The mode in which the Java Container Sandbox should run detailed by
       the JavaSandboxLinuxContainerRuntime.</description>
     <name>yarn.nodemanager.runtime.linux.sandbox-mode</name>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/73625168/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
index 00771ff..0ae3d0f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java

[41/47] hadoop git commit: HADOOP-15680. ITestNativeAzureFileSystemConcurrencyLive times out. Contributed by Andras Bokor.

2018-08-31 Thread xkrogen
HADOOP-15680. ITestNativeAzureFileSystemConcurrencyLive times out.
Contributed by Andras Bokor.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e8d138ca
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e8d138ca
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e8d138ca

Branch: refs/heads/HDFS-12943
Commit: e8d138ca7c1b695688515d816ac693437c87df62
Parents: 2e6c110
Author: Steve Loughran 
Authored: Thu Aug 30 14:36:00 2018 +0100
Committer: Steve Loughran 
Committed: Thu Aug 30 14:36:00 2018 +0100

--
 .../hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e8d138ca/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
--
diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
index 87cac15..1c868ea 100644
--- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
+++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
@@ -39,7 +39,7 @@ public class ITestNativeAzureFileSystemConcurrencyLive
 extends AbstractWasbTestBase {
 
   private static final int THREAD_COUNT = 102;
-  private static final int TEST_EXECUTION_TIMEOUT = 5000;
+  private static final int TEST_EXECUTION_TIMEOUT = 30000;
 
   @Override
   protected AzureBlobStorageTestAccount createTestAccount() throws Exception {


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[26/47] hadoop git commit: HDDS-382. Remove RatisTestHelper#RatisTestSuite constructor argument and fix checkstyle in ContainerTestHelper, GenericTestUtils Contributed by Nandakumar.

2018-08-31 Thread xkrogen
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5629d54/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRandom.java
--
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRandom.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRandom.java
index 3b4426c..b652b6b 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRandom.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRandom.java
@@ -51,9 +51,9 @@ public class TestSCMContainerPlacementRandom {
 .thenReturn(new ArrayList<>(datanodes));
 
 when(mockNodeManager.getNodeStat(anyObject()))
-.thenReturn(new SCMNodeMetric(100l, 0l, 100l));
+.thenReturn(new SCMNodeMetric(100L, 0L, 100L));
 when(mockNodeManager.getNodeStat(datanodes.get(2)))
-.thenReturn(new SCMNodeMetric(100l, 90l, 10l));
+.thenReturn(new SCMNodeMetric(100L, 90L, 10L));
 
 SCMContainerPlacementRandom scmContainerPlacementRandom =
 new SCMContainerPlacementRandom(mockNodeManager, conf);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5629d54/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/replication/TestReplicationManager.java
--
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/replication/TestReplicationManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/replication/TestReplicationManager.java
index fa87706..da05913 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/replication/TestReplicationManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/replication/TestReplicationManager.java
@@ -21,7 +21,6 @@ import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Objects;
-import java.util.UUID;
 
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
@@ -132,7 +131,7 @@ public class TestReplicationManager {
   //WHEN
 
   queue.fireEvent(SCMEvents.REPLICATE_CONTAINER,
-  new ReplicationRequest(1l, (short) 2, System.currentTimeMillis(),
+  new ReplicationRequest(1L, (short) 2, System.currentTimeMillis(),
   (short) 3));
 
   Thread.sleep(500L);
@@ -159,10 +158,8 @@ public class TestReplicationManager {
   leaseManager.start();
 
   ReplicationManager replicationManager =
-  new ReplicationManager(containerPlacementPolicy, containerStateManager,
-
-
-  queue, leaseManager) {
+  new ReplicationManager(containerPlacementPolicy,
+  containerStateManager, queue, leaseManager) {
 @Override
 protected List getCurrentReplicas(
 ReplicationRequest request) throws IOException {
@@ -172,7 +169,7 @@ public class TestReplicationManager {
   replicationManager.start();
 
   queue.fireEvent(SCMEvents.REPLICATE_CONTAINER,
-  new ReplicationRequest(1l, (short) 2, System.currentTimeMillis(),
+  new ReplicationRequest(1L, (short) 2, System.currentTimeMillis(),
   (short) 3));
 
   Thread.sleep(500L);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5629d54/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/replication/TestReplicationQueue.java
--
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/replication/TestReplicationQueue.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/replication/TestReplicationQueue.java
index a593718..9dd4fe3 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/replication/TestReplicationQueue.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/replication/TestReplicationQueue.java
@@ -92,8 +92,8 @@ public class TestReplicationQueue {
 1, replicationQueue.size());
 Assert.assertEquals(temp, msg5);
 
-// Message 2 should be ordered before message 5 as both have same replication
-// number but message 2 has earlier timestamp.
+// Message 2 should be ordered before message 5 as both have same
+// replication number but message 2 has earlier timestamp.
 temp = replicationQueue.take();
 Assert.assertEquals("Should have 0 objects",
 replicationQueue.size(), 0);


[43/47] hadoop git commit: HADOOP-15706. Typo in compatibility doc: SHOUD -> SHOULD (Contributed by Laszlo Kollar via Daniel Templeton)

2018-08-31 Thread xkrogen
HADOOP-15706. Typo in compatibility doc: SHOUD -> SHOULD
(Contributed by Laszlo Kollar via Daniel Templeton)

Change-Id: I6e2459d0700df7f3bad4eac8297a11690191c3ba


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f2c2a68e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f2c2a68e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f2c2a68e

Branch: refs/heads/HDFS-12943
Commit: f2c2a68ec208f640e778fc41f95f0284fcc44729
Parents: 5a0babf
Author: Daniel Templeton 
Authored: Thu Aug 30 09:12:36 2018 -0700
Committer: Daniel Templeton 
Committed: Thu Aug 30 09:12:36 2018 -0700

--
 .../hadoop-common/src/site/markdown/Compatibility.md   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f2c2a68e/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
--
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md b/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
index 6b17c62..03d162a 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
@@ -187,7 +187,7 @@ existing documentation and tests and/or adding new documentation or tests.
 
 #### Java Binary compatibility for end-user applications i.e. Apache Hadoop ABI
 
-Apache Hadoop revisions SHOUD retain binary compatability such that end-user
+Apache Hadoop revisions SHOULD retain binary compatability such that end-user
 applications continue to work without any modifications. Minor Apache Hadoop
 revisions within the same major revision MUST retain compatibility such that
 existing MapReduce applications (e.g. end-user applications and projects such


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[27/47] hadoop git commit: HDDS-382. Remove RatisTestHelper#RatisTestSuite constructor argument and fix checkstyle in ContainerTestHelper, GenericTestUtils Contributed by Nandakumar.

2018-08-31 Thread xkrogen
HDDS-382. Remove RatisTestHelper#RatisTestSuite constructor argument and fix 
checkstyle in ContainerTestHelper, GenericTestUtils
Contributed by Nandakumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c5629d54
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c5629d54
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c5629d54

Branch: refs/heads/HDFS-12943
Commit: c5629d546d64091a14560df488a7f797a150337e
Parents: 33f42ef
Author: Anu Engineer 
Authored: Tue Aug 28 14:06:19 2018 -0700
Committer: Anu Engineer 
Committed: Tue Aug 28 14:06:19 2018 -0700

--
 .../apache/hadoop/hdds/scm/XceiverClient.java   |  6 +--
 .../hadoop/hdds/scm/XceiverClientGrpc.java  |  6 +--
 .../hadoop/hdds/scm/XceiverClientManager.java   |  2 +-
 .../hdds/scm/storage/ChunkInputStream.java  |  7 +--
 .../hdds/scm/storage/ChunkOutputStream.java |  4 +-
 .../org/apache/hadoop/hdds/client/BlockID.java  |  5 +-
 .../hadoop/hdds/scm/XceiverClientSpi.java   |  2 -
 .../common/helpers/AllocatedBlock.java  |  4 +-
 .../container/common/helpers/ContainerInfo.java | 12 ++---
 .../common/helpers/ContainerWithPipeline.java   |  7 +--
 .../scm/container/common/helpers/Pipeline.java  | 11 ++---
 .../StorageContainerLocationProtocol.java   |  6 ++-
 ...rLocationProtocolClientSideTranslatorPB.java | 21 
 .../scm/storage/ContainerProtocolCalls.java |  6 +--
 .../org/apache/hadoop/ozone/OzoneConsts.java|  5 --
 .../ozone/container/common/helpers/KeyData.java |  8 ++--
 .../apache/hadoop/utils/HddsVersionInfo.java|  6 ++-
 .../apache/hadoop/utils/TestMetadataStore.java  |  1 -
 .../hadoop/ozone/HddsDatanodeService.java   |  3 +-
 .../common/helpers/ContainerUtils.java  | 22 -
 .../container/common/impl/ContainerSet.java |  2 +-
 .../common/impl/OpenContainerBlockMap.java  | 19 
 .../server/ratis/XceiverServerRatis.java|  6 +--
 .../keyvalue/interfaces/KeyManager.java |  4 +-
 .../ozone/protocol/commands/CommandStatus.java  | 16 +++
 .../ozone/container/common/ScmTestMock.java |  6 ++-
 .../common/interfaces/TestHandler.java  |  7 ---
 .../endpoint/TestHeartbeatEndpointTask.java |  2 -
 .../TestRoundRobinVolumeChoosingPolicy.java |  5 +-
 .../container/ozoneimpl/TestOzoneContainer.java |  3 +-
 .../hadoop/hdds/server/events/EventWatcher.java |  6 ++-
 .../hdds/server/events/TestEventQueue.java  |  3 --
 .../hadoop/hdds/scm/block/BlockManagerImpl.java | 18 +++
 .../hdds/scm/block/DeletedBlockLogImpl.java |  3 +-
 .../hdds/scm/block/SCMBlockDeletingService.java |  4 +-
 .../container/CloseContainerEventHandler.java   |  4 +-
 .../hdds/scm/container/ContainerMapping.java|  4 +-
 .../scm/container/ContainerStateManager.java|  7 +--
 .../replication/ReplicationManager.java |  2 +-
 .../scm/container/states/ContainerStateMap.java |  2 +-
 .../hdds/scm/node/states/Node2ContainerMap.java |  4 +-
 .../scm/node/states/NodeNotFoundException.java  |  2 -
 .../hdds/scm/node/states/ReportResult.java  |  3 +-
 .../hdds/scm/pipelines/Node2PipelineMap.java| 50 +---
 .../hdds/scm/pipelines/PipelineManager.java |  6 +--
 .../hdds/scm/pipelines/PipelineSelector.java|  7 +--
 .../scm/server/SCMClientProtocolServer.java |  3 +-
 .../org/apache/hadoop/hdds/scm/TestUtils.java   |  8 ++--
 .../hadoop/hdds/scm/block/TestBlockManager.java |  1 -
 .../hdds/scm/block/TestDeletedBlockLog.java |  7 +--
 .../command/TestCommandStatusReportHandler.java | 22 -
 .../TestCloseContainerEventHandler.java |  1 -
 .../scm/container/TestContainerMapping.java |  7 +--
 .../container/TestContainerReportHandler.java   |  2 +-
 .../TestSCMContainerPlacementCapacity.java  |  8 ++--
 .../TestSCMContainerPlacementRandom.java|  4 +-
 .../replication/TestReplicationManager.java | 11 ++---
 .../replication/TestReplicationQueue.java   |  4 +-
 .../hdds/scm/node/TestContainerPlacement.java   |  5 +-
 .../hadoop/hdds/scm/node/TestNodeManager.java   |  3 +-
 .../hdds/scm/node/TestNodeReportHandler.java|  3 +-
 .../ozone/container/common/TestEndPoint.java|  9 ++--
 .../placement/TestContainerPlacement.java   |  6 ++-
 .../apache/hadoop/ozone/client/ObjectStore.java |  7 ++-
 .../hdds/scm/pipeline/TestPipelineClose.java|  4 --
 .../apache/hadoop/ozone/RatisTestHelper.java|  8 ++--
 .../TestStorageContainerManagerHelper.java  |  2 -
 .../rpc/TestCloseContainerHandlingByClient.java |  3 +-
 .../ozone/container/ContainerTestHelper.java|  2 -
 .../common/impl/TestContainerPersistence.java   |  1 -
 .../ozoneimpl/TestOzoneContainerRatis.java  |  3 +-
 .../container/ozoneimpl/TestRatisManager.java   |  4 +-
 .../hadoop/ozone/scm/TestAllocateContainer.java |  2 -
 

[33/47] hadoop git commit: HDDS-380. Remove synchronization from ChunkGroupOutputStream and ChunkOutputStream. Contributed by Shashikant Banerjee.

2018-08-31 Thread xkrogen
HDDS-380. Remove synchronization from ChunkGroupOutputStream and 
ChunkOutputStream. Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0bd42171
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0bd42171
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0bd42171

Branch: refs/heads/HDFS-12943
Commit: 0bd4217194ae50ec30e386b200fcfa54c069f042
Parents: 3fa4639
Author: Nanda kumar 
Authored: Wed Aug 29 13:31:19 2018 +0530
Committer: Nanda kumar 
Committed: Wed Aug 29 13:31:19 2018 +0530

--
 .../hadoop/hdds/scm/storage/ChunkOutputStream.java  | 16 
 .../ozone/client/io/ChunkGroupOutputStream.java | 12 ++--
 2 files changed, 14 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bd42171/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
--
diff --git a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
index f2df3fa..8d311d0 100644
--- a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
+++ b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
@@ -99,7 +99,7 @@ public class ChunkOutputStream extends OutputStream {
   }
 
   @Override
-  public synchronized void write(int b) throws IOException {
+  public void write(int b) throws IOException {
 checkOpen();
 int rollbackPosition = buffer.position();
 int rollbackLimit = buffer.limit();
@@ -110,7 +110,7 @@ public class ChunkOutputStream extends OutputStream {
   }
 
   @Override
-  public synchronized void write(byte[] b, int off, int len)
+  public void write(byte[] b, int off, int len)
   throws IOException {
 if (b == null) {
   throw new NullPointerException();
@@ -137,7 +137,7 @@ public class ChunkOutputStream extends OutputStream {
   }
 
   @Override
-  public synchronized void flush() throws IOException {
+  public void flush() throws IOException {
 checkOpen();
 if (buffer.position() > 0) {
   int rollbackPosition = buffer.position();
@@ -147,7 +147,7 @@ public class ChunkOutputStream extends OutputStream {
   }
 
   @Override
-  public synchronized void close() throws IOException {
+  public void close() throws IOException {
 if (xceiverClientManager != null && xceiverClient != null
 && buffer != null) {
   if (buffer.position() > 0) {
@@ -164,7 +164,7 @@ public class ChunkOutputStream extends OutputStream {
 }
   }
 
-  public synchronized void cleanup() {
+  public void cleanup() {
 xceiverClientManager.releaseClient(xceiverClient);
 xceiverClientManager = null;
 xceiverClient = null;
@@ -176,7 +176,7 @@ public class ChunkOutputStream extends OutputStream {
*
* @throws IOException if stream is closed
*/
-  private synchronized void checkOpen() throws IOException {
+  private void checkOpen() throws IOException {
 if (xceiverClient == null) {
   throw new IOException("ChunkOutputStream has been closed.");
 }
@@ -191,7 +191,7 @@ public class ChunkOutputStream extends OutputStream {
* @param rollbackLimit limit to restore in buffer if write fails
* @throws IOException if there is an I/O error while performing the call
*/
-  private synchronized void flushBufferToChunk(int rollbackPosition,
+  private void flushBufferToChunk(int rollbackPosition,
   int rollbackLimit) throws IOException {
 boolean success = false;
 try {
@@ -213,7 +213,7 @@ public class ChunkOutputStream extends OutputStream {
*
* @throws IOException if there is an I/O error while performing the call
*/
-  private synchronized void writeChunkToContainer() throws IOException {
+  private void writeChunkToContainer() throws IOException {
 buffer.flip();
 ByteString data = ByteString.copyFrom(buffer);
 ChunkInfo chunk = ChunkInfo

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bd42171/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
--
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
index 988af07..00624d5 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
@@ -105,7 +105,7 @@ public 

[23/47] hadoop git commit: HDFS-13861. RBF: Illegal Router Admin command leads to printing usage for all commands. Contributed by Ayush Saxena.

2018-08-31 Thread xkrogen
HDFS-13861. RBF: Illegal Router Admin command leads to printing usage for all 
commands. Contributed by Ayush Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cb9d371a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cb9d371a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cb9d371a

Branch: refs/heads/HDFS-12943
Commit: cb9d371ae2cda1624fc83316ddc09de37d8d0bd3
Parents: fd089ca
Author: Brahma Reddy Battula 
Authored: Wed Aug 29 00:29:05 2018 +0530
Committer: Brahma Reddy Battula 
Committed: Wed Aug 29 00:29:05 2018 +0530

--
 .../hdfs/tools/federation/RouterAdmin.java  | 92 +---
 .../federation/router/TestRouterAdminCLI.java   | 68 +++
 2 files changed, 130 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb9d371a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index f88d0a6..46be373 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -94,25 +94,58 @@ public class RouterAdmin extends Configured implements Tool {
* Print the usage message.
*/
   public void printUsage() {
-String usage = "Federation Admin Tools:\n"
-+ "\t[-add <source> <nameservice1, nameservice2, ...> <destination> "
-+ "[-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL] "
-+ "-owner <owner> -group <group> -mode <mode>]\n"
-+ "\t[-update <source> <nameservice1, nameservice2, ...> <destination> "
-+ "[-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL] "
-+ "-owner <owner> -group <group> -mode <mode>]\n"
-+ "\t[-rm <source>]\n"
-+ "\t[-ls <path>]\n"
-+ "\t[-setQuota <path> -nsQuota <nsQuota> -ssQuota "
-+ "<quota in bytes or quota size string>]\n"
-+ "\t[-clrQuota <path>]\n"
-+ "\t[-safemode enter | leave | get]\n"
-+ "\t[-nameservice enable | disable <nameservice>]\n"
-+ "\t[-getDisabledNameservices]\n";
+String usage = getUsage(null);
+System.out.println(usage);
+  }
 
+  private void printUsage(String cmd) {
+String usage = getUsage(cmd);
 System.out.println(usage);
   }
 
+  private String getUsage(String cmd) {
+if (cmd == null) {
+  String[] commands =
+  {"-add", "-update", "-rm", "-ls", "-setQuota", "-clrQuota",
+  "-safemode", "-nameservice", "-getDisabledNameservices"};
+  StringBuilder usage = new StringBuilder();
+  usage.append("Usage: hdfs routeradmin :\n");
+  for (int i = 0; i < commands.length; i++) {
+usage.append(getUsage(commands[i]));
+if (i + 1 < commands.length) {
+  usage.append("\n");
+}
+  }
+  return usage.toString();
+}
+if (cmd.equals("-add")) {
+  return "\t[-add <source> <nameservice1, nameservice2, ...> <destination> "
+  + "[-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL] "
+  + "-owner <owner> -group <group> -mode <mode>]";
+} else if (cmd.equals("-update")) {
+  return "\t[-update <source> <nameservice1, nameservice2, ...> "
+  + "<destination> "
+  + "[-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL] "
+  + "-owner <owner> -group <group> -mode <mode>]";
+} else if (cmd.equals("-rm")) {
+  return "\t[-rm <source>]";
+} else if (cmd.equals("-ls")) {
+  return "\t[-ls <path>]";
+} else if (cmd.equals("-setQuota")) {
+  return "\t[-setQuota <path> -nsQuota <nsQuota> -ssQuota "
+  + "<quota in bytes or quota size string>]";
+} else if (cmd.equals("-clrQuota")) {
+  return "\t[-clrQuota <path>]";
+} else if (cmd.equals("-safemode")) {
+  return "\t[-safemode enter | leave | get]";
+} else if (cmd.equals("-nameservice")) {
+  return "\t[-nameservice enable | disable <nameservice>]";
+} else if (cmd.equals("-getDisabledNameservices")) {
+  return "\t[-getDisabledNameservices]";
+}
+return getUsage(null);
+  }
+
   @Override
   public int run(String[] argv) throws Exception {
 if (argv.length < 1) {
@@ -129,43 +162,43 @@ public class RouterAdmin extends Configured implements 
Tool {
 if ("-add".equals(cmd)) {
   if (argv.length < 4) {
 System.err.println("Not enough parameters specified for cmd " + cmd);
-printUsage();
+printUsage(cmd);
 return exitCode;
   }
 } else if ("-update".equals(cmd)) {
   if (argv.length < 4) {
 System.err.println("Not enough parameters specified for cmd " + cmd);
-printUsage();
+printUsage(cmd);
 return exitCode;
   }
-} else if ("-rm".equalsIgnoreCase(cmd)) {
+} else if ("-rm".equals(cmd)) {
   if (argv.length < 2) {
 System.err.println("Not enough parameters 

[45/47] hadoop git commit: Revert "HDFS-13838. WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status. Contributed by Siyao Meng."

2018-08-31 Thread xkrogen
Revert "HDFS-13838. WebHdfsFileSystem.getFileStatus() won't return correct 
"snapshot enabled" status. Contributed by Siyao Meng."

This reverts commit 26c2a97c566969f50eb8e8432009724c51152a98.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8aa6c4f0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8aa6c4f0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8aa6c4f0

Branch: refs/heads/HDFS-12943
Commit: 8aa6c4f079fd38a3230bc070c2ce837fefbc5301
Parents: c36d69a
Author: Wei-Chiu Chuang 
Authored: Thu Aug 30 11:44:20 2018 -0700
Committer: Wei-Chiu Chuang 
Committed: Thu Aug 30 11:44:20 2018 -0700

--
 .../java/org/apache/hadoop/hdfs/web/JsonUtilClient.java |  4 
 .../java/org/apache/hadoop/hdfs/web/TestWebHDFS.java| 12 
 2 files changed, 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8aa6c4f0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
index a685573..9bb1846 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
@@ -133,7 +133,6 @@ class JsonUtilClient {
 Boolean aclBit = (Boolean) m.get("aclBit");
 Boolean encBit = (Boolean) m.get("encBit");
 Boolean erasureBit  = (Boolean) m.get("ecBit");
-Boolean snapshotEnabledBit  = (Boolean) m.get("snapshotEnabled");
 EnumSet<HdfsFileStatus.Flags> f =
 EnumSet.noneOf(HdfsFileStatus.Flags.class);
 if (aclBit != null && aclBit) {
@@ -145,9 +144,6 @@ class JsonUtilClient {
 if (erasureBit != null && erasureBit) {
   f.add(HdfsFileStatus.Flags.HAS_EC);
 }
-if (snapshotEnabledBit != null && snapshotEnabledBit) {
-  f.add(HdfsFileStatus.Flags.SNAPSHOT_ENABLED);
-}
 
 Map ecPolicyObj = (Map) m.get("ecPolicyObj");
 ErasureCodingPolicy ecPolicy = null;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8aa6c4f0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
index 9152636..cbc428a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
@@ -482,9 +482,6 @@ public class TestWebHDFS {
 
   // allow snapshots on /bar using webhdfs
   webHdfs.allowSnapshot(bar);
-  // check if snapshot status is enabled
-  assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
-  assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
   webHdfs.createSnapshot(bar, "s1");
   final Path s1path = SnapshotTestHelper.getSnapshotRoot(bar, "s1");
   Assert.assertTrue(webHdfs.exists(s1path));
@@ -494,24 +491,15 @@ public class TestWebHDFS {
   assertEquals(bar, snapshottableDirs[0].getFullPath());
   dfs.deleteSnapshot(bar, "s1");
   dfs.disallowSnapshot(bar);
-  // check if snapshot status is disabled
-  assertFalse(dfs.getFileStatus(bar).isSnapshotEnabled());
-  assertFalse(webHdfs.getFileStatus(bar).isSnapshotEnabled());
   snapshottableDirs = dfs.getSnapshottableDirListing();
   assertNull(snapshottableDirs);
 
   // disallow snapshots on /bar using webhdfs
   dfs.allowSnapshot(bar);
-  // check if snapshot status is enabled, again
-  assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
-  assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
   snapshottableDirs = dfs.getSnapshottableDirListing();
   assertEquals(1, snapshottableDirs.length);
   assertEquals(bar, snapshottableDirs[0].getFullPath());
   webHdfs.disallowSnapshot(bar);
-  // check if snapshot status is disabled, again
-  assertFalse(dfs.getFileStatus(bar).isSnapshotEnabled());
-  assertFalse(webHdfs.getFileStatus(bar).isSnapshotEnabled());
   snapshottableDirs = dfs.getSnapshottableDirListing();
   assertNull(snapshottableDirs);
   try {




[36/47] hadoop git commit: HDDS-280. Support ozone dist-start-stitching on openbsd/osx. Contributed by Elek, Marton.

2018-08-31 Thread xkrogen
HDDS-280. Support ozone dist-start-stitching on openbsd/osx. Contributed by 
Elek, Marton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/692736f7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/692736f7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/692736f7

Branch: refs/heads/HDFS-12943
Commit: 692736f7cfb72b8932dc2eb4f4faa995dc6521f8
Parents: 7362516
Author: Mukul Kumar Singh 
Authored: Thu Aug 30 02:21:24 2018 +0530
Committer: Mukul Kumar Singh 
Committed: Thu Aug 30 02:21:24 2018 +0530

--
 dev-support/bin/ozone-dist-layout-stitching   |  6 +++---
 dev-support/bin/ozone-dist-tar-stitching  |  9 ++---
 hadoop-ozone/acceptance-test/dev-support/bin/robot-all.sh |  2 +-
 .../acceptance-test/dev-support/bin/robot-dnd-all.sh  | 10 ++
 hadoop-ozone/acceptance-test/dev-support/bin/robot.sh |  7 ---
 hadoop-ozone/acceptance-test/pom.xml  |  7 +++
 .../src/test/acceptance/basic/ozone-shell.robot   |  1 -
 .../acceptance-test/src/test/acceptance/commonlib.robot   |  2 +-
 hadoop-ozone/common/pom.xml   |  5 +
 hadoop-ozone/docs/content/GettingStarted.md   |  3 ++-
 hadoop-ozone/pom.xml  |  5 +
 11 files changed, 24 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/692736f7/dev-support/bin/ozone-dist-layout-stitching
--
diff --git a/dev-support/bin/ozone-dist-layout-stitching 
b/dev-support/bin/ozone-dist-layout-stitching
index 2ba7791..1ba652c 100755
--- a/dev-support/bin/ozone-dist-layout-stitching
+++ b/dev-support/bin/ozone-dist-layout-stitching
@@ -117,9 +117,9 @@ ROOT=$(cd "${BASEDIR}"/../..;pwd)
 echo
 echo "Current directory $(pwd)"
 echo
-run rm -rf "ozone"
-run mkdir "ozone"
-run cd "ozone"
+run rm -rf "ozone-${HDDS_VERSION}"
+run mkdir "ozone-${HDDS_VERSION}"
+run cd "ozone-${HDDS_VERSION}"
 run cp -p "${ROOT}/LICENSE.txt" .
 run cp -p "${ROOT}/NOTICE.txt" .
 run cp -p "${ROOT}/README.txt" .

http://git-wip-us.apache.org/repos/asf/hadoop/blob/692736f7/dev-support/bin/ozone-dist-tar-stitching
--
diff --git a/dev-support/bin/ozone-dist-tar-stitching 
b/dev-support/bin/ozone-dist-tar-stitching
index d1116e4..93d0525 100755
--- a/dev-support/bin/ozone-dist-tar-stitching
+++ b/dev-support/bin/ozone-dist-tar-stitching
@@ -36,13 +36,8 @@ function run()
   fi
 }
 
-#To make the final dist directory easily mountable from docker we don't use
-#version name in the directory name.
-#To include the version name in the root directory of the tar file
-# we create a symbolic link and dereference it during the tar creation
-ln -s -f ozone ozone-${VERSION}
-run tar -c --dereference -f "ozone-${VERSION}.tar" "ozone-${VERSION}"
+run tar -c -f "ozone-${VERSION}.tar" "ozone-${VERSION}"
 run gzip -f "ozone-${VERSION}.tar"
 echo
 echo "Ozone dist tar available at: ${BASEDIR}/ozone-${VERSION}.tar.gz"
-echo
\ No newline at end of file
+echo

http://git-wip-us.apache.org/repos/asf/hadoop/blob/692736f7/hadoop-ozone/acceptance-test/dev-support/bin/robot-all.sh
--
diff --git a/hadoop-ozone/acceptance-test/dev-support/bin/robot-all.sh 
b/hadoop-ozone/acceptance-test/dev-support/bin/robot-all.sh
index ee9c6b8..87b7137 100755
--- a/hadoop-ozone/acceptance-test/dev-support/bin/robot-all.sh
+++ b/hadoop-ozone/acceptance-test/dev-support/bin/robot-all.sh
@@ -15,4 +15,4 @@
 # limitations under the License.
 
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-$DIR/robot.sh $DIR/../../src/test/acceptance
+"$DIR/robot.sh" "$DIR/../../src/test/acceptance"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/692736f7/hadoop-ozone/acceptance-test/dev-support/bin/robot-dnd-all.sh
--
diff --git a/hadoop-ozone/acceptance-test/dev-support/bin/robot-dnd-all.sh 
b/hadoop-ozone/acceptance-test/dev-support/bin/robot-dnd-all.sh
index 9f1d367..052ffb3 100755
--- a/hadoop-ozone/acceptance-test/dev-support/bin/robot-dnd-all.sh
+++ b/hadoop-ozone/acceptance-test/dev-support/bin/robot-dnd-all.sh
@@ -18,15 +18,9 @@ set -x
 
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
 
-#Dir od the definition of the dind based test exeucution container
-DOCKERDIR="$DIR/../docker"
-
 #Dir to save the results
 TARGETDIR="$DIR/../../target/dnd"
 
-#Dir to mount the distribution from
-OZONEDIST="$DIR/../../../../hadoop-dist/target/ozone"
-
 #Name and imagename of the temporary, dind based test containers
 DOCKER_IMAGE_NAME=ozoneacceptance
 

[14/47] hadoop git commit: HADOOP-15699. Fix some of testContainerManager failures in Windows. Contributed by Botong Huang.

2018-08-31 Thread xkrogen
HADOOP-15699. Fix some of testContainerManager failures in Windows. Contributed 
by Botong Huang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/602d1384
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/602d1384
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/602d1384

Branch: refs/heads/HDFS-12943
Commit: 602d13844a8d4c7b08ce185da01fde098ff8b9a6
Parents: 05b2bbe
Author: Giovanni Matteo Fumarola 
Authored: Mon Aug 27 12:25:46 2018 -0700
Committer: Giovanni Matteo Fumarola 
Committed: Mon Aug 27 12:25:46 2018 -0700

--
 .../containermanager/TestContainerManager.java| 18 ++
 1 file changed, 6 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/602d1384/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
index ee5259f..d28340b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
@@ -320,9 +320,8 @@ public class TestContainerManager extends 
BaseContainerManagerTest {
 
   @Test (timeout = 1L)
   public void testAuxPathHandler() throws Exception {
-File testDir = GenericTestUtils.getTestDir(GenericTestUtils.getTestDir(
-TestContainerManager.class.getSimpleName() + "LocDir").
-getAbsolutePath());
+File testDir = GenericTestUtils
+.getTestDir(TestContainerManager.class.getSimpleName() + "LocDir");
 testDir.mkdirs();
 File testFile = new File(testDir, "test");
 testFile.createNewFile();
@@ -1977,15 +1976,11 @@ public class TestContainerManager extends 
BaseContainerManagerTest {
 Signal signal = ContainerLaunch.translateCommandToSignal(command);
 containerManager.start();
 
-File scriptFile = new File(tmpDir, "scriptFile.sh");
+File scriptFile = Shell.appendScriptExtension(tmpDir, "scriptFile");
 PrintWriter fileWriter = new PrintWriter(scriptFile);
 File processStartFile =
 new File(tmpDir, "start_file.txt").getAbsoluteFile();
-fileWriter.write("\numask 0"); // So that start file is readable by the 
test
-fileWriter.write("\necho Hello World! > " + processStartFile);
-fileWriter.write("\necho $$ >> " + processStartFile);
-fileWriter.write("\nexec sleep 1000s");
-fileWriter.close();
+writeScriptFile(fileWriter, "Hello world!", processStartFile, null, false);
 
 ContainerLaunchContext containerLaunchContext =
 recordFactory.newRecordInstance(ContainerLaunchContext.class);
@@ -2008,9 +2003,8 @@ public class TestContainerManager extends 
BaseContainerManagerTest {
 new HashMap();
 localResources.put(destinationFile, rsrc_alpha);
 containerLaunchContext.setLocalResources(localResources);
-List<String> commands = new ArrayList<>();
-commands.add("/bin/bash");
-commands.add(scriptFile.getAbsolutePath());
+List<String> commands =
+Arrays.asList(Shell.getRunScriptCommand(scriptFile));
 containerLaunchContext.setCommands(commands);
 StartContainerRequest scRequest =
 StartContainerRequest.newInstance(
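
The change above swaps hard-coded "/bin/bash" invocations for Hadoop's Shell helpers so the same test script runs on both Unix and Windows. A minimal standalone sketch of that pattern, assuming only org.apache.hadoop.util.Shell on the classpath (the demo class and file names are illustrative):

import java.io.File;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.util.Shell;

public class PortableScriptDemo {
  public static void main(String[] args) throws Exception {
    File tmpDir = new File(System.getProperty("java.io.tmpdir"));
    // Yields "scriptFile.sh" on Unix and "scriptFile.cmd" on Windows.
    File scriptFile = Shell.appendScriptExtension(tmpDir, "scriptFile");
    // Yields the matching interpreter command, e.g. ["bash", path] on Unix.
    List<String> commands = Arrays.asList(Shell.getRunScriptCommand(scriptFile));
    System.out.println(commands);
  }
}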





[30/47] hadoop git commit: HDDS-365. Implement flushStateMachineData for containerStateMachine. Contributed by Shashikant Banerjee.

2018-08-31 Thread xkrogen
HDDS-365. Implement flushStateMachineData for containerStateMachine. 
Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2651e2c4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2651e2c4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2651e2c4

Branch: refs/heads/HDFS-12943
Commit: 2651e2c43d0825912669a87afc256bad9f1ea6ed
Parents: 7ed458b
Author: Mukul Kumar Singh 
Authored: Wed Aug 29 07:57:57 2018 +0530
Committer: Mukul Kumar Singh 
Committed: Wed Aug 29 07:58:30 2018 +0530

--
 .../apache/hadoop/hdds/scm/XceiverClientGrpc.java |  2 +-
 .../transport/server/XceiverServerGrpc.java   |  2 +-
 .../server/ratis/ContainerStateMachine.java   | 18 ++
 hadoop-project/pom.xml|  2 +-
 4 files changed, 21 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2651e2c4/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
--
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
index e2416c2..1622ddb 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
@@ -87,7 +87,7 @@ public class XceiverClientGrpc extends XceiverClientSpi {
 }
 LOG.debug("Connecting to server Port : " + leader.getIpAddress());
 channel = NettyChannelBuilder.forAddress(leader.getIpAddress(), port)
-.usePlaintext(true)
+.usePlaintext()
 .maxInboundMessageSize(OzoneConfigKeys.DFS_CONTAINER_CHUNK_MAX_SIZE)
 .build();
 asyncStub = XceiverClientProtocolServiceGrpc.newStub(channel);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2651e2c4/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
index f4f3f6f..4dc232d 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
@@ -77,7 +77,7 @@ public final class XceiverServerGrpc implements 
XceiverServerSpi {
 datanodeDetails.setPort(
 DatanodeDetails.newPort(DatanodeDetails.Port.Name.STANDALONE, port));
 server = ((NettyServerBuilder) ServerBuilder.forPort(port))
-.maxMessageSize(OzoneConfigKeys.DFS_CONTAINER_CHUNK_MAX_SIZE)
+.maxInboundMessageSize(OzoneConfigKeys.DFS_CONTAINER_CHUNK_MAX_SIZE)
 .addService(new GrpcXceiverService(dispatcher))
 .build();
 storageContainer = dispatcher;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2651e2c4/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
index ede87f4..68d6d5b 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
@@ -59,6 +59,7 @@ import java.util.List;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ThreadPoolExecutor;
+import java.util.stream.Collectors;
 
 /** A {@link org.apache.ratis.statemachine.StateMachine} for containers.
  *
@@ -316,6 +317,23 @@ public class ContainerStateMachine extends 
BaseStateMachine {
 return LogEntryProto.newBuilder().setSmLogEntry(log).build();
   }
 
+  /**
+   * Returns the combined future of all the writeChunks till the given log
+   * index. The Raft log worker will wait for the stateMachineData to complete
+   * flush as well.
+   *
+   * @param index log 
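
The javadoc above (cut off by the archive) describes flushStateMachineData: before Ratis flushes its log, the state machine must expose a combined future covering every pending writeChunk up to the given log index. A hedged sketch of that pattern with plain CompletableFutures; the map name and types are illustrative, not the actual ContainerStateMachine fields:

import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

public class FlushUpToIndexDemo {
  // Hypothetical map of raft log index -> pending writeChunk future.
  private final ConcurrentHashMap<Long, CompletableFuture<Void>> writeChunkFutureMap =
      new ConcurrentHashMap<>();

  // Combined future that completes once every write at or below the index is done.
  public CompletableFuture<Void> flushStateMachineData(long index) {
    List<CompletableFuture<Void>> pending = writeChunkFutureMap.entrySet().stream()
        .filter(entry -> entry.getKey() <= index)
        .map(Map.Entry::getValue)
        .collect(Collectors.toList());
    return CompletableFuture.allOf(pending.toArray(new CompletableFuture[0]));
  }
}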

[13/47] hadoop git commit: YARN-8675. Remove default hostname for docker containers when net=host. Contributed by Suma Shivaprasad

2018-08-31 Thread xkrogen
YARN-8675. Remove default hostname for docker containers when net=host. 
Contributed by Suma Shivaprasad


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/05b2bbeb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/05b2bbeb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/05b2bbeb

Branch: refs/heads/HDFS-12943
Commit: 05b2bbeb357d4fa03e71f2bfd5d8eeb0ea6c3f60
Parents: c9b6395
Author: Billie Rinaldi 
Authored: Mon Aug 27 11:34:33 2018 -0700
Committer: Billie Rinaldi 
Committed: Mon Aug 27 11:34:33 2018 -0700

--
 .../runtime/DockerLinuxContainerRuntime.java| 49 
 1 file changed, 29 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/05b2bbeb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
index 1872830..00771ff 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
@@ -134,8 +134,8 @@ import static 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.r
  *   <li>
  * {@code YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_HOSTNAME} sets the
  * hostname to be used by the Docker container. If not specified, a
- * hostname will be derived from the container ID.  This variable is
- * ignored if the network is 'host' and Registry DNS is not enabled.
+ * hostname will be derived from the container ID and set as default
+ * hostname for networks other than 'host'.
  *   </li>
  *   <li>
  * {@code YARN_CONTAINER_RUNTIME_DOCKER_RUN_PRIVILEGED_CONTAINER}
@@ -549,22 +549,34 @@ public class DockerLinuxContainerRuntime implements 
LinuxContainerRuntime {
 }
   }
 
-  /** Set a DNS friendly hostname. */
-  private void setHostname(DockerRunCommand runCommand, String
-  containerIdStr, String name)
+  /** Set a DNS friendly hostname.
+   *  Only add hostname if network is not host or if hostname is
+   *  specified via YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_HOSTNAME
+   *  in host network mode
+   */
+  private void setHostname(DockerRunCommand runCommand,
+  String containerIdStr, String network, String name)
   throws ContainerExecutionException {
-if (name == null || name.isEmpty()) {
-  name = RegistryPathUtils.encodeYarnID(containerIdStr);
 
-  String domain = conf.get(RegistryConstants.KEY_DNS_DOMAIN);
-  if (domain != null) {
-name += ("." + domain);
+if (network.equalsIgnoreCase("host")) {
+  if (name != null && !name.isEmpty()) {
+LOG.info("setting hostname in container to: " + name);
+runCommand.setHostname(name);
   }
-  validateHostname(name);
-}
+} else {
+  //get default hostname
+  if (name == null || name.isEmpty()) {
+name = RegistryPathUtils.encodeYarnID(containerIdStr);
 
-LOG.info("setting hostname in container to: " + name);
-runCommand.setHostname(name);
+String domain = conf.get(RegistryConstants.KEY_DNS_DOMAIN);
+if (domain != null) {
+  name += ("." + domain);
+}
+validateHostname(name);
+  }
+  LOG.info("setting hostname in container to: " + name);
+  runCommand.setHostname(name);
+}
   }
 
   /**
@@ -823,12 +835,9 @@ public class DockerLinuxContainerRuntime implements 
LinuxContainerRuntime {
 DockerRunCommand runCommand = new DockerRunCommand(containerIdStr,
 dockerRunAsUser, imageName)
 .setNetworkType(network);
-// Only add hostname if network is not host or if Registry DNS is enabled.
-if (!network.equalsIgnoreCase("host") ||
-conf.getBoolean(RegistryConstants.KEY_DNS_ENABLED,
-RegistryConstants.DEFAULT_DNS_ENABLED)) {
-  setHostname(runCommand, containerIdStr, hostname);
-}
+
+setHostname(runCommand, containerIdStr, network, hostname);
+
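
With this patch, host-network containers only get an explicit hostname when the submitter sets YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_HOSTNAME. A hedged sketch of requesting one from the client side via the standard launch-context environment (the hostname value is an example, not a default):

import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.util.Records;

public class DockerHostnameDemo {
  public static ContainerLaunchContext buildContext() {
    ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
    Map<String, String> env = new HashMap<>();
    env.put("YARN_CONTAINER_RUNTIME_TYPE", "docker");
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK", "host");
    // Without this variable, a host-network container now keeps the host's name.
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_HOSTNAME", "my-app-master");
    ctx.setEnvironment(env);
    return ctx;
  }
}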
 

[40/47] hadoop git commit: HADOOP-15667. FileSystemMultipartUploader should verify that UploadHandle has non-0 length. Contributed by Ewan Higgs

2018-08-31 Thread xkrogen
HADOOP-15667. FileSystemMultipartUploader should verify that UploadHandle has 
non-0 length.
Contributed by Ewan Higgs


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2e6c1109
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2e6c1109
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2e6c1109

Branch: refs/heads/HDFS-12943
Commit: 2e6c1109dcdeedb59a3345047e9201271c9a0b27
Parents: 781437c
Author: Steve Loughran 
Authored: Thu Aug 30 14:33:16 2018 +0100
Committer: Steve Loughran 
Committed: Thu Aug 30 14:33:16 2018 +0100

--
 .../hadoop/fs/FileSystemMultipartUploader.java  |  6 ++-
 .../org/apache/hadoop/fs/MultipartUploader.java | 11 +
 .../AbstractContractMultipartUploaderTest.java  | 43 
 .../hadoop/fs/s3a/S3AMultipartUploader.java | 10 ++---
 4 files changed, 61 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e6c1109/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemMultipartUploader.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemMultipartUploader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemMultipartUploader.java
index a700a9f..f13b50b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemMultipartUploader.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemMultipartUploader.java
@@ -68,6 +68,7 @@ public class FileSystemMultipartUploader extends 
MultipartUploader {
   throws IOException {
 
 byte[] uploadIdByteArray = uploadId.toByteArray();
+checkUploadId(uploadIdByteArray);
 Path collectorPath = new Path(new String(uploadIdByteArray, 0,
 uploadIdByteArray.length, Charsets.UTF_8));
 Path partPath =
@@ -101,6 +102,8 @@ public class FileSystemMultipartUploader extends 
MultipartUploader {
   List<Pair<Integer, PartHandle>> handles, UploadHandle multipartUploadId)
   throws IOException {
 
+checkUploadId(multipartUploadId.toByteArray());
+
 if (handles.isEmpty()) {
   throw new IOException("Empty upload");
 }
@@ -133,8 +136,7 @@ public class FileSystemMultipartUploader extends 
MultipartUploader {
   @Override
   public void abort(Path filePath, UploadHandle uploadId) throws IOException {
 byte[] uploadIdByteArray = uploadId.toByteArray();
-Preconditions.checkArgument(uploadIdByteArray.length != 0,
-"UploadId is empty");
+checkUploadId(uploadIdByteArray);
 Path collectorPath = new Path(new String(uploadIdByteArray, 0,
 uploadIdByteArray.length, Charsets.UTF_8));
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e6c1109/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MultipartUploader.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MultipartUploader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MultipartUploader.java
index 47fd9f2..76f58d3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MultipartUploader.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MultipartUploader.java
@@ -21,6 +21,7 @@ import java.io.IOException;
 import java.io.InputStream;
 import java.util.List;
 
+import com.google.common.base.Preconditions;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -91,4 +92,14 @@ public abstract class MultipartUploader {
   public abstract void abort(Path filePath, UploadHandle multipartUploadId)
   throws IOException;
 
+  /**
+   * Utility method to validate uploadIDs
+   * @param uploadId
+   * @throws IllegalArgumentException
+   */
+  protected void checkUploadId(byte[] uploadId)
+  throws IllegalArgumentException {
+Preconditions.checkArgument(uploadId.length > 0,
+"Empty UploadId is not valid");
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e6c1109/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMultipartUploaderTest.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMultipartUploaderTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMultipartUploaderTest.java
index c0e1600..85a6861 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMultipartUploaderTest.java
+++ 
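
The new checkUploadId guard rejects a zero-length UploadHandle with an IllegalArgumentException before any filesystem work starts. A self-contained sketch of the same Guava Preconditions pattern (demo class, not the Hadoop one):

import com.google.common.base.Preconditions;

public class UploadIdCheckDemo {
  static void checkUploadId(byte[] uploadId) {
    // Fail fast on a zero-length id instead of resolving a bogus path later.
    Preconditions.checkArgument(uploadId.length > 0,
        "Empty UploadId is not valid");
  }

  public static void main(String[] args) {
    checkUploadId("part-upload-01".getBytes()); // passes
    checkUploadId(new byte[0]);                 // throws IllegalArgumentException
  }
}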

[31/47] hadoop git commit: HDFS-13854. RBF: The ProcessingAvgTime and ProxyAvgTime should display by JMX with ms unit. Contributed by yanghuafeng.

2018-08-31 Thread xkrogen
HDFS-13854. RBF: The ProcessingAvgTime and ProxyAvgTime should display by JMX 
with ms unit. Contributed by yanghuafeng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/64ad0298
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/64ad0298
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/64ad0298

Branch: refs/heads/HDFS-12943
Commit: 64ad0298d441559951bc9589a40f8aab17c93a5f
Parents: 2651e2c
Author: Brahma Reddy Battula 
Authored: Wed Aug 29 08:29:50 2018 +0530
Committer: Brahma Reddy Battula 
Committed: Wed Aug 29 08:29:50 2018 +0530

--
 .../federation/metrics/FederationRPCMetrics.java | 13 ++---
 .../metrics/FederationRPCPerformanceMonitor.java | 15 +--
 2 files changed, 7 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/64ad0298/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
index 9ab4e5a..cce4b86 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
@@ -86,15 +86,6 @@ public class FederationRPCMetrics implements 
FederationRPCMBean {
   }
 
   /**
-   * Convert nanoseconds to milliseconds.
-   * @param ns Time in nanoseconds.
-   * @return Time in milliseconds.
-   */
-  private static double toMs(double ns) {
-return ns / 1000000;
-  }
-
-  /**
* Reset the metrics system.
*/
   public static void reset() {
@@ -230,7 +221,7 @@ public class FederationRPCMetrics implements 
FederationRPCMBean {
 
   @Override
   public double getProxyAvg() {
-return toMs(proxy.lastStat().mean());
+return proxy.lastStat().mean();
   }
 
   @Override
@@ -250,7 +241,7 @@ public class FederationRPCMetrics implements 
FederationRPCMBean {
 
   @Override
   public double getProcessingAvg() {
-return toMs(processing.lastStat().mean());
+return processing.lastStat().mean();
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/64ad0298/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
index 2c2741e..15725d1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
@@ -35,6 +35,8 @@ import org.slf4j.LoggerFactory;
 
 import com.google.common.util.concurrent.ThreadFactoryBuilder;
 
+import static org.apache.hadoop.util.Time.monotonicNow;
+
 /**
  * Customizable RPC performance monitor. Receives events from the RPC server
  * and aggregates them via JMX.
@@ -120,12 +122,12 @@ public class FederationRPCPerformanceMonitor implements 
RouterRpcMonitor {
 
   @Override
   public void startOp() {
-START_TIME.set(this.getNow());
+START_TIME.set(monotonicNow());
   }
 
   @Override
   public long proxyOp() {
-PROXY_TIME.set(this.getNow());
+PROXY_TIME.set(monotonicNow());
 long processingTime = getProcessingTime();
 if (processingTime >= 0) {
   metrics.addProcessingTime(processingTime);
@@ -188,13 +190,6 @@ public class FederationRPCPerformanceMonitor implements 
RouterRpcMonitor {
 metrics.incrRouterFailureLocked();
   }
 
-  /**
-   * Get current time.
-   * @return Current time in nanoseconds.
-   */
-  private long getNow() {
-return System.nanoTime();
-  }
 
   /**
* Get time between we receiving the operation and sending it to the 
Namenode.
@@ -214,7 +209,7 @@ public class FederationRPCPerformanceMonitor implements 
RouterRpcMonitor {
*/
   private long getProxyTime() {
 if (PROXY_TIME.get() != null && PROXY_TIME.get() > 0) {
-  return getNow() - PROXY_TIME.get();
+  return monotonicNow() - PROXY_TIME.get();
 }
 return -1;
   }
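
The patch removes both the System.nanoTime clock and the toMs division in one move: Time.monotonicNow already returns milliseconds from a monotonic source, so ProcessingAvgTime and ProxyAvgTime reach JMX in ms with no further scaling. A minimal sketch of timing an operation that way:

import static org.apache.hadoop.util.Time.monotonicNow;

public class ProxyTimingDemo {
  public static void main(String[] args) throws InterruptedException {
    long start = monotonicNow();  // milliseconds, immune to wall-clock jumps
    Thread.sleep(25);             // stand-in for the proxied RPC
    long elapsedMs = monotonicNow() - start;
    System.out.println("proxy took " + elapsedMs + " ms"); // no ns-to-ms division
  }
}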



[34/47] hadoop git commit: HDFS-13634. RBF: Configurable value in xml for async connection request queue size. Contributed by CR Hota.

2018-08-31 Thread xkrogen
HDFS-13634. RBF: Configurable value in xml for async connection request queue 
size. Contributed by CR Hota.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a0ebb6b3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a0ebb6b3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a0ebb6b3

Branch: refs/heads/HDFS-12943
Commit: a0ebb6b39f2932d3ea2fb5e287f52b841e108428
Parents: 0bd4217
Author: Yiqun Lin 
Authored: Wed Aug 29 16:15:22 2018 +0800
Committer: Yiqun Lin 
Committed: Wed Aug 29 16:15:22 2018 +0800

--
 .../federation/router/ConnectionManager.java  | 18 +++---
 .../server/federation/router/RBFConfigKeys.java   |  5 +
 .../src/main/resources/hdfs-rbf-default.xml   |  8 
 3 files changed, 24 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0ebb6b3/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index 0b50845..9fb83e4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -49,9 +49,6 @@ public class ConnectionManager {
   private static final Logger LOG =
   LoggerFactory.getLogger(ConnectionManager.class);
 
-  /** Number of parallel new connections to create. */
-  protected static final int MAX_NEW_CONNECTIONS = 100;
-
   /** Minimum amount of active connections: 50%. */
   protected static final float MIN_ACTIVE_RATIO = 0.5f;
 
@@ -77,8 +74,10 @@ public class ConnectionManager {
   private final Lock writeLock = readWriteLock.writeLock();
 
   /** Queue for creating new connections. */
-  private final BlockingQueue<ConnectionPool> creatorQueue =
-  new ArrayBlockingQueue<>(MAX_NEW_CONNECTIONS);
+  private final BlockingQueue<ConnectionPool> creatorQueue;
+  /** Max size of queue for creating new connections. */
+  private final int creatorQueueMaxSize;
+
   /** Create new connections asynchronously. */
   private final ConnectionCreator creator;
   /** Periodic executor to remove stale connection pools. */
@@ -106,7 +105,12 @@ public class ConnectionManager {
 this.pools = new HashMap<>();
 
 // Create connections in a thread asynchronously
-this.creator = new ConnectionCreator(creatorQueue);
+this.creatorQueueMaxSize = this.conf.getInt(
+RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_CREATOR_QUEUE_SIZE,
+RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_CREATOR_QUEUE_SIZE_DEFAULT
+);
+this.creatorQueue = new ArrayBlockingQueue<>(this.creatorQueueMaxSize);
+this.creator = new ConnectionCreator(this.creatorQueue);
 this.creator.setDaemon(true);
 
 // Cleanup periods
@@ -213,7 +217,7 @@ public class ConnectionManager {
 if (conn == null || !conn.isUsable()) {
   if (!this.creatorQueue.offer(pool)) {
 LOG.error("Cannot add more than {} connections at the same time",
-MAX_NEW_CONNECTIONS);
+this.creatorQueueMaxSize);
   }
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0ebb6b3/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index 87df5d2..997e1dd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -93,6 +93,11 @@ public class RBFConfigKeys extends 
CommonConfigurationKeysPublic {
   TimeUnit.SECONDS.toMillis(5);
 
   // HDFS Router NN client
+  public static final String
+  DFS_ROUTER_NAMENODE_CONNECTION_CREATOR_QUEUE_SIZE =
+  FEDERATION_ROUTER_PREFIX + "connection.creator.queue-size";
+  public static final int
+  DFS_ROUTER_NAMENODE_CONNECTION_CREATOR_QUEUE_SIZE_DEFAULT = 100;
   public static final String DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE =
   FEDERATION_ROUTER_PREFIX + "connection.pool-size";
   public static final 
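
The creator queue bound now comes from configuration rather than the old MAX_NEW_CONNECTIONS constant. A hedged sketch of overriding it; the full key assumes the usual dfs.federation.router. prefix behind FEDERATION_ROUTER_PREFIX:

import org.apache.hadoop.conf.Configuration;

public class CreatorQueueSizeDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Same key now documented in hdfs-rbf-default.xml; the default is 100.
    conf.setInt("dfs.federation.router.connection.creator.queue-size", 200);
    System.out.println(conf.getInt(
        "dfs.federation.router.connection.creator.queue-size", 100));
  }
}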

[21/47] hadoop git commit: HDDS-359. RocksDB Profiles support. Contributed by Anu Engineer.

2018-08-31 Thread xkrogen
HDDS-359. RocksDB Profiles support. Contributed by Anu Engineer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c61824a1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c61824a1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c61824a1

Branch: refs/heads/HDFS-12943
Commit: c61824a18940ef37dc7201717a3115a78bf942d4
Parents: df21e1b
Author: Márton Elek 
Authored: Tue Aug 28 19:22:30 2018 +0200
Committer: Márton Elek 
Committed: Tue Aug 28 19:33:13 2018 +0200

--
 .../org/apache/hadoop/hdds/HddsConfigKeys.java  |   6 +
 .../hadoop/utils/db/DBConfigFromFile.java   | 134 +
 .../org/apache/hadoop/utils/db/DBProfile.java   | 120 +++
 .../apache/hadoop/utils/db/DBStoreBuilder.java  | 201 +++
 .../org/apache/hadoop/utils/db/RDBStore.java|  32 +--
 .../org/apache/hadoop/utils/db/TableConfig.java |  93 +
 .../common/src/main/resources/ozone-default.xml |  10 +
 .../hadoop/utils/db/TestDBConfigFromFile.java   | 116 +++
 .../hadoop/utils/db/TestDBStoreBuilder.java | 174 
 .../apache/hadoop/utils/db/TestRDBStore.java|  17 +-
 .../hadoop/utils/db/TestRDBTableStore.java  |  11 +-
 .../common/src/test/resources/test.db.ini   | 145 +
 hadoop-hdds/pom.xml |   1 +
 13 files changed, 1040 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c61824a1/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
index d25af80..8272ed7 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hdds;
 
+import org.apache.hadoop.utils.db.DBProfile;
+
 /**
  * This class contains constants for configuration keys and default values
  * used in hdds.
@@ -58,4 +60,8 @@ public final class HddsConfigKeys {
   public static final String HDDS_DATANODE_VOLUME_CHOOSING_POLICY =
   "hdds.datanode.volume.choosing.policy";
 
+  // DB Profiles used by ROCKDB instances.
+  public static final String HDDS_DB_PROFILE = "hdds.db.profile";
+  public static final DBProfile HDDS_DEFAULT_DB_PROFILE = DBProfile.SSD;
+
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c61824a1/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBConfigFromFile.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBConfigFromFile.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBConfigFromFile.java
new file mode 100644
index 000..753a460
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBConfigFromFile.java
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db;
+
+import com.google.common.base.Preconditions;
+import org.eclipse.jetty.util.StringUtil;
+import org.rocksdb.ColumnFamilyDescriptor;
+import org.rocksdb.DBOptions;
+import org.rocksdb.Env;
+import org.rocksdb.OptionsUtil;
+import org.rocksdb.RocksDBException;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.List;
+
+/**
+ * A Class that controls the standard config options of RocksDB.
+ * 
+ * Important : Some of the functions in this file are magic functions designed
+ * for the use of OZONE developers only. Due to that this information is
+ * documented in this files only and is *not* intended for end user 
consumption.
+ * Please do not use this information to tune your production environments.
+ * Please remember the SpiderMan principal; with great 
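
The new hdds.db.profile key picks a canned RocksDB tuning profile, with DBProfile.SSD as the default. A minimal sketch of resolving the profile from configuration using the constants added above (printing it only; the option accessors are outside this excerpt):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.HddsConfigKeys;
import org.apache.hadoop.utils.db.DBProfile;

public class DbProfileDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Falls back to HDDS_DEFAULT_DB_PROFILE (SSD) when hdds.db.profile is unset.
    DBProfile profile = conf.getEnum(HddsConfigKeys.HDDS_DB_PROFILE,
        HddsConfigKeys.HDDS_DEFAULT_DB_PROFILE);
    System.out.println("Using RocksDB profile: " + profile);
  }
}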

[24/47] hadoop git commit: HDDS-376. Create custom message structure for use in AuditLogging Contributed by Dinesh Chitlangia.

2018-08-31 Thread xkrogen
HDDS-376. Create custom message structure for use in AuditLogging
Contributed by Dinesh Chitlangia.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ac515d22
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ac515d22
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ac515d22

Branch: refs/heads/HDFS-12943
Commit: ac515d22d84478acbed92ef4024d9a3d3f329c8a
Parents: cb9d371
Author: Anu Engineer 
Authored: Tue Aug 28 12:59:08 2018 -0700
Committer: Anu Engineer 
Committed: Tue Aug 28 12:59:08 2018 -0700

--
 .../apache/hadoop/ozone/audit/AuditLogger.java  |  66 --
 .../apache/hadoop/ozone/audit/AuditMessage.java |  64 ++
 .../apache/hadoop/ozone/audit/package-info.java |  19 ++-
 .../ozone/audit/TestOzoneAuditLogger.java   | 124 ---
 4 files changed, 177 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ac515d22/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/AuditLogger.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/AuditLogger.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/AuditLogger.java
index 46ffaab..ee20c66 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/AuditLogger.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/AuditLogger.java
@@ -21,10 +21,8 @@ import com.google.common.annotations.VisibleForTesting;
 import org.apache.logging.log4j.Level;
 import org.apache.logging.log4j.LogManager;
 import org.apache.logging.log4j.Marker;
-import org.apache.logging.log4j.message.StructuredDataMessage;
 import org.apache.logging.log4j.spi.ExtendedLogger;
 
-import java.util.Map;
 
 /**
  * Class to define Audit Logger for Ozone.
@@ -32,16 +30,13 @@ import java.util.Map;
 public class AuditLogger {
 
   private ExtendedLogger logger;
-
-  private static final String SUCCESS = AuditEventStatus.SUCCESS.getStatus();
-  private static final String FAILURE = AuditEventStatus.FAILURE.getStatus();
   private static final String FQCN = AuditLogger.class.getName();
   private static final Marker WRITE_MARKER = AuditMarker.WRITE.getMarker();
   private static final Marker READ_MARKER = AuditMarker.READ.getMarker();
 
   /**
* Parametrized Constructor to initialize logger.
-   * @param type
+   * @param type Audit Logger Type
*/
   public AuditLogger(AuditLoggerType type){
 initializeLogger(type);
@@ -60,68 +55,53 @@ public class AuditLogger {
 return logger;
   }
 
-  public void logWriteSuccess(AuditAction type, Map<String, String> data) {
-logWriteSuccess(type, data, Level.INFO);
+  public void logWriteSuccess(AuditMessage msg) {
+logWriteSuccess(Level.INFO, msg);
   }
 
-  public void logWriteSuccess(AuditAction type, Map<String, String> data, Level
-  level) {
-StructuredDataMessage msg = new StructuredDataMessage("", SUCCESS,
-type.getAction(), data);
+  public void logWriteSuccess(Level level, AuditMessage msg) {
 this.logger.logIfEnabled(FQCN, level, WRITE_MARKER, msg, null);
   }
 
-
-  public void logWriteFailure(AuditAction type, Map<String, String> data) {
-logWriteFailure(type, data, Level.INFO, null);
+  public void logWriteFailure(AuditMessage msg) {
+logWriteFailure(Level.ERROR, msg);
   }
 
-  public void logWriteFailure(AuditAction type, Map<String, String> data, Level
-  level) {
-logWriteFailure(type, data, level, null);
+  public void logWriteFailure(Level level, AuditMessage msg) {
+logWriteFailure(level, msg, null);
   }
 
-  public void logWriteFailure(AuditAction type, Map<String, String> data,
-  Throwable exception) {
-logWriteFailure(type, data, Level.INFO, exception);
+  public void logWriteFailure(AuditMessage msg, Throwable exception) {
+logWriteFailure(Level.ERROR, msg, exception);
   }
 
-  public void logWriteFailure(AuditAction type, Map<String, String> data, Level
-  level, Throwable exception) {
-StructuredDataMessage msg = new StructuredDataMessage("", FAILURE,
-type.getAction(), data);
+  public void logWriteFailure(Level level, AuditMessage msg,
+  Throwable exception) {
 this.logger.logIfEnabled(FQCN, level, WRITE_MARKER, msg, exception);
   }
 
-  public void logReadSuccess(AuditAction type, Map<String, String> data) {
-logReadSuccess(type, data, Level.INFO);
+  public void logReadSuccess(AuditMessage msg) {
+logReadSuccess(Level.INFO, msg);
   }
 
-  public void logReadSuccess(AuditAction type, Map<String, String> data, Level
-  level) {
-StructuredDataMessage msg = new StructuredDataMessage("", SUCCESS,
-type.getAction(), data);
+  public void logReadSuccess(Level level, AuditMessage msg) {
 this.logger.logIfEnabled(FQCN, level, READ_MARKER, msg, null);
   }
 
-  public void 
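
The logger API now takes a prebuilt AuditMessage and an optional log4j2 Level instead of an action plus a raw map. A hedged sketch of the new call shape using the overloads visible in the diff; AuditLoggerType.OMLOGGER is an assumed enum constant, and building the AuditMessage itself is left out since its constructor is not shown in this excerpt:

import org.apache.logging.log4j.Level;
import org.apache.hadoop.ozone.audit.AuditLogger;
import org.apache.hadoop.ozone.audit.AuditLoggerType;
import org.apache.hadoop.ozone.audit.AuditMessage;

public class AuditCallShapeDemo {
  private static final AuditLogger AUDIT =
      new AuditLogger(AuditLoggerType.OMLOGGER);

  static void record(AuditMessage msg, boolean ok, Throwable err) {
    if (ok) {
      AUDIT.logWriteSuccess(msg);             // defaults to Level.INFO
    } else if (err != null) {
      AUDIT.logWriteFailure(msg, err);        // defaults to Level.ERROR, keeps stack
    } else {
      AUDIT.logWriteFailure(Level.WARN, msg); // explicit level override
    }
  }
}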

[28/47] hadoop git commit: HDFS-13731. ReencryptionUpdater fails with ConcurrentModificationException during processCheckpoints. Contributed by Zsolt Venczel.

2018-08-31 Thread xkrogen
HDFS-13731. ReencryptionUpdater fails with ConcurrentModificationException 
during processCheckpoints. Contributed by Zsolt Venczel.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3e18b957
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3e18b957
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3e18b957

Branch: refs/heads/HDFS-12943
Commit: 3e18b957ebdf20925224ab9c28e6c2f4b6bbdb24
Parents: c5629d5
Author: Zsolt Venczel 
Authored: Tue Aug 28 15:11:58 2018 -0700
Committer: Xiao Chen 
Committed: Tue Aug 28 15:13:43 2018 -0700

--
 .../server/namenode/ReencryptionHandler.java|  6 +--
 .../server/namenode/ReencryptionUpdater.java| 52 ++--
 2 files changed, 30 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e18b957/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
index c8c8d68..a8acccd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
@@ -714,10 +714,10 @@ public class ReencryptionHandler implements Runnable {
   zst = new ZoneSubmissionTracker();
   submissions.put(zoneId, zst);
 }
+Future future = batchService.submit(new EDEKReencryptCallable(zoneId,
+currentBatch, reencryptionHandler));
+zst.addTask(future);
   }
-  Future future = batchService.submit(new EDEKReencryptCallable(zoneId,
-  currentBatch, reencryptionHandler));
-  zst.addTask(future);
   LOG.info("Submitted batch (start:{}, size:{}) of zone {} to re-encrypt.",
   currentBatch.getFirstFilePath(), currentBatch.size(), zoneId);
   currentBatch = new ReencryptionBatch(reencryptBatchSize);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e18b957/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java
index a5923a7..15cfa92 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java
@@ -383,32 +383,34 @@ public final class ReencryptionUpdater implements 
Runnable {
 final LinkedList tasks = tracker.getTasks();
 final List<XAttr> xAttrs = Lists.newArrayListWithCapacity(1);
 ListIterator iter = tasks.listIterator();
-while (iter.hasNext()) {
-  Future curr = iter.next();
-  if (curr.isCancelled()) {
-break;
-  }
-  if (!curr.isDone() || !curr.get().processed) {
-// still has earlier tasks not completed, skip here.
-break;
-  }
-  ReencryptionTask task = curr.get();
-  LOG.debug("Updating re-encryption checkpoint with completed task."
-  + " last: {} size:{}.", task.lastFile, task.batch.size());
-  assert zoneId == task.zoneId;
-  try {
-final XAttr xattr = FSDirEncryptionZoneOp
-.updateReencryptionProgress(dir, zoneNode, status, task.lastFile,
-task.numFilesUpdated, task.numFailures);
-xAttrs.clear();
-xAttrs.add(xattr);
-  } catch (IOException ie) {
-LOG.warn("Failed to update re-encrypted progress to xattr for zone {}",
-zonePath, ie);
-++task.numFailures;
+synchronized (handler) {
+  while (iter.hasNext()) {
+Future curr = iter.next();
+if (curr.isCancelled()) {
+  break;
+}
+if (!curr.isDone() || !curr.get().processed) {
+  // still has earlier tasks not completed, skip here.
+  break;
+}
+ReencryptionTask task = curr.get();
+LOG.debug("Updating re-encryption checkpoint with completed task."
++ " last: {} size:{}.", task.lastFile, task.batch.size());
+assert zoneId == task.zoneId;
+try {
+  final XAttr xattr = FSDirEncryptionZoneOp
+  .updateReencryptionProgress(dir, zoneNode, status, 
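
The fix serializes the updater's checkpoint scan against the handler's task-list mutations by taking the same monitor on both sides, which is what prevents the ConcurrentModificationException. A self-contained sketch of that locking discipline with generic names, not the HDFS classes:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SharedListDemo {
  private final Object handler = new Object();   // shared monitor
  private final List<String> tasks = new ArrayList<>();

  void addTask(String t) {
    synchronized (handler) {   // writer side: same lock as the scan below
      tasks.add(t);
    }
  }

  void processCheckpoints() {
    synchronized (handler) {   // reader side: iteration never sees a mid-mutation list
      for (Iterator<String> it = tasks.iterator(); it.hasNext();) {
        if (it.next().isEmpty()) {
          it.remove();
        }
      }
    }
  }
}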

[01/47] hadoop git commit: HDFS-13831. Make block increment deletion number configurable. Contributed by Ryan Wu.

2018-08-31 Thread xkrogen
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-12943 191faeb96 -> 039c158d2


HDFS-13831. Make block increment deletion number configurable. Contributed by 
Ryan Wu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b9b964d2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b9b964d2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b9b964d2

Branch: refs/heads/HDFS-12943
Commit: b9b964d25335943fb15cdfcf369d123bbd7e454a
Parents: a4121c7
Author: Yiqun Lin 
Authored: Mon Aug 27 14:55:46 2018 +0800
Committer: Yiqun Lin 
Committed: Mon Aug 27 14:55:46 2018 +0800

--
 .../main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java  |  5 +
 .../apache/hadoop/hdfs/server/namenode/FSNamesystem.java | 11 +--
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml  | 10 ++
 .../hdfs/server/namenode/TestLargeDirectoryDelete.java   |  2 +-
 4 files changed, 25 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b964d2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 5ed35b8..bd88341 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -395,6 +395,11 @@ public class DFSConfigKeys extends CommonConfigurationKeys 
{
   public static final String  
DFS_NAMENODE_STARTUP_DELAY_BLOCK_DELETION_SEC_KEY = 
"dfs.namenode.startup.delay.block.deletion.sec";
   public static final long
DFS_NAMENODE_STARTUP_DELAY_BLOCK_DELETION_SEC_DEFAULT = 0L;
 
+  /** Block deletion increment. */
+  public static final String DFS_NAMENODE_BLOCK_DELETION_INCREMENT_KEY =
+  "dfs.namenode.block.deletion.increment";
+  public static final int DFS_NAMENODE_BLOCK_DELETION_INCREMENT_DEFAULT = 1000;
+
   public static final String DFS_NAMENODE_SNAPSHOT_CAPTURE_OPENFILES =
   HdfsClientConfigKeys.DFS_NAMENODE_SNAPSHOT_CAPTURE_OPENFILES;
   public static final boolean DFS_NAMENODE_SNAPSHOT_CAPTURE_OPENFILES_DEFAULT =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b964d2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 06bf008..6ba0e0b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -428,12 +428,12 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   FSNamesystem.class.getName() + ".audit");
 
   private final int maxCorruptFileBlocksReturn;
-  static int BLOCK_DELETION_INCREMENT = 1000;
   private final boolean isPermissionEnabled;
   private final UserGroupInformation fsOwner;
   private final String supergroup;
   private final boolean standbyShouldCheckpoint;
   private final int snapshotDiffReportLimit;
+  private final int blockDeletionIncrement;
 
   /** Interval between each check of lease to release. */
   private final long leaseRecheckIntervalMs;
@@ -909,6 +909,13 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   DFSConfigKeys.DFS_NAMENODE_LIST_OPENFILES_NUM_RESPONSES +
   " must be a positive integer."
   );
+
+  this.blockDeletionIncrement = conf.getInt(
+  DFSConfigKeys.DFS_NAMENODE_BLOCK_DELETION_INCREMENT_KEY,
+  DFSConfigKeys.DFS_NAMENODE_BLOCK_DELETION_INCREMENT_DEFAULT);
+  Preconditions.checkArgument(blockDeletionIncrement > 0,
+  DFSConfigKeys.DFS_NAMENODE_BLOCK_DELETION_INCREMENT_KEY +
+  " must be a positive integer.");
 } catch(IOException e) {
   LOG.error(getClass().getSimpleName() + " initialization failed.", e);
   close();
@@ -3094,7 +3101,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 while (iter.hasNext()) {
   writeLock();
   try {
-for (int i = 0; i < BLOCK_DELETION_INCREMENT && iter.hasNext(); i++) {
+for (int i = 0; i < blockDeletionIncrement && iter.hasNext(); i++) {
   blockManager.removeBlock(iter.next());
 }
   } finally {
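
The former BLOCK_DELETION_INCREMENT constant of 1000 is now the dfs.namenode.block.deletion.increment key, read once at FSNamesystem construction and checked to be positive. A minimal sketch of overriding it, reusing the key and default from the diff:

import org.apache.hadoop.conf.Configuration;

public class DeletionIncrementDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Larger batches delete faster but hold the namesystem write lock longer.
    conf.setInt("dfs.namenode.block.deletion.increment", 5000);
    System.out.println(conf.getInt(
        "dfs.namenode.block.deletion.increment", 1000));
  }
}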


[12/47] hadoop git commit: HDDS-375. ContainerReportHandler should not send replication events for open containers. Contributed by Ajay Kumar.

2018-08-31 Thread xkrogen
HDDS-375. ContainerReportHandler should not send replication events for open 
containers. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c9b63956
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c9b63956
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c9b63956

Branch: refs/heads/HDFS-12943
Commit: c9b63956d97521ec21a051bfcbbf4b79262ea16f
Parents: f152582
Author: Xiaoyu Yao 
Authored: Mon Aug 27 10:39:30 2018 -0700
Committer: Xiaoyu Yao 
Committed: Mon Aug 27 10:40:33 2018 -0700

--
 .../scm/container/ContainerReportHandler.java   |  4 ++
 .../container/TestContainerReportHandler.java   | 40 +++-
 2 files changed, 34 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c9b63956/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
index 5a9e726..5ca2bcb 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
@@ -129,6 +129,10 @@ public class ContainerReportHandler implements
   "Container is missing from containerStateManager. Can't request "
   + "replication. {}",
   containerID);
+  return;
+}
+if (container.isContainerOpen()) {
+  return;
 }
 if (replicationStatus.isReplicationEnabled()) {
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c9b63956/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerReportHandler.java
--
diff --git 
a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerReportHandler.java
 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerReportHandler.java
index e7b6cd9..443b4b2 100644
--- 
a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerReportHandler.java
+++ 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerReportHandler.java
@@ -84,6 +84,7 @@ public class TestContainerReportHandler implements 
EventPublisher {
 new Builder()
 .setReplicationFactor(ReplicationFactor.THREE)
 .setContainerID((Long) invocation.getArguments()[0])
+.setState(LifeCycleState.CLOSED)
 .build()
 );
 
@@ -116,26 +117,45 @@ public class TestContainerReportHandler implements 
EventPublisher {
 when(pipelineSelector.getReplicationPipeline(ReplicationType.STAND_ALONE,
 ReplicationFactor.THREE)).thenReturn(pipeline);
 
-long c1 = containerStateManager
+ContainerInfo cont1 = containerStateManager
 .allocateContainer(pipelineSelector, ReplicationType.STAND_ALONE,
-ReplicationFactor.THREE, "root").getContainerInfo()
-.getContainerID();
-
-long c2 = containerStateManager
+ReplicationFactor.THREE, "root").getContainerInfo();
+ContainerInfo cont2 = containerStateManager
 .allocateContainer(pipelineSelector, ReplicationType.STAND_ALONE,
-ReplicationFactor.THREE, "root").getContainerInfo()
-.getContainerID();
-
+ReplicationFactor.THREE, "root").getContainerInfo();
+// Open Container
+ContainerInfo cont3 = containerStateManager
+.allocateContainer(pipelineSelector, ReplicationType.STAND_ALONE,
+ReplicationFactor.THREE, "root").getContainerInfo();
+
+long c1 = cont1.getContainerID();
+long c2 = cont2.getContainerID();
+long c3 = cont3.getContainerID();
+
+// Close remaining containers
+try {
+  containerStateManager.getContainerStateMap()
+  .updateState(cont1, cont1.getState(), LifeCycleState.CLOSING);
+  containerStateManager.getContainerStateMap()
+  .updateState(cont1, cont1.getState(), LifeCycleState.CLOSED);
+  containerStateManager.getContainerStateMap()
+  .updateState(cont2, cont2.getState(), LifeCycleState.CLOSING);
+  containerStateManager.getContainerStateMap()
+  .updateState(cont2, cont2.getState(), LifeCycleState.CLOSED);
+
+} catch (IOException e) {
+  LOG.info("Failed to change state of open containers.", e);
+}
 //when
 
 //initial reports before replication is enabled. 2 
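
In plain form, the guard added above reads roughly as follows; this is a
paraphrase of the hunk with the surrounding handler code compressed into
comments, not the verbatim file (the lookup call is an assumption):

// Paraphrased sketch of the patched flow in ContainerReportHandler:
// a report for a container that is still OPEN must not trigger replication.
ContainerInfo container = getContainer(containerID); // lookup paraphrased
if (container == null) {
  LOG.warn("Container is missing from containerStateManager. Can't request "
      + "replication. {}", containerID);
  return;
}
if (container.isContainerOpen()) {
  return; // still being written to; replicating now would copy a moving target
}
if (replicationStatus.isReplicationEnabled()) {
  // only closed, under-replicated containers reach the replication queue
}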

[47/47] hadoop git commit: HDFS-13779. [SBN read] Implement proper failover and observer failure handling logic for ObserverReadProxyProvider. Contributed by Erik Krogen.

2018-08-31 Thread xkrogen
HDFS-13779. [SBN read] Implement proper failover and observer failure handling 
logic for ObserverReadProxyProvider. Contributed by Erik Krogen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/039c158d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/039c158d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/039c158d

Branch: refs/heads/HDFS-12943
Commit: 039c158d2c8a45906e6ea5f9661391bc541ab0cb
Parents: 5320173
Author: Erik Krogen 
Authored: Fri Aug 24 05:04:27 2018 -0700
Committer: Erik Krogen 
Committed: Fri Aug 31 09:09:59 2018 -0700

--
 .../ha/AbstractNNFailoverProxyProvider.java |  16 +
 .../namenode/ha/ObserverReadProxyProvider.java  | 255 --
 .../server/namenode/ha/TestObserverNode.java|  27 +-
 .../ha/TestObserverReadProxyProvider.java   | 335 +++
 4 files changed, 532 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/039c158d/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
index 252b70d..32edb36 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
@@ -30,6 +30,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
 import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.apache.hadoop.hdfs.HAUtilClient;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
@@ -111,6 +112,12 @@ public abstract class AbstractNNFailoverProxyProvider 
implements
*/
  public static class NNProxyInfo<T> extends ProxyInfo<T> {
 private InetSocketAddress address;
+/**
+ * The currently known state of the NameNode represented by this ProxyInfo.
+ * This may be out of date if the NameNode has changed state since the last
+ * time the state was checked.
+ */
+private HAServiceState cachedState;
 
 public NNProxyInfo(InetSocketAddress address) {
   super(null, address.toString());
@@ -120,6 +127,15 @@ public abstract class AbstractNNFailoverProxyProvider 
implements
 public InetSocketAddress getAddress() {
   return address;
 }
+
+public void setCachedState(HAServiceState state) {
+  cachedState = state;
+}
+
+public HAServiceState getCachedState() {
+  return cachedState;
+}
+
   }
 
   @Override
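
A hypothetical caller-side sketch of the new accessors; chooseReadProxy and
its surroundings are illustrative, only NNProxyInfo, getCachedState and
HAServiceState come from the patch (imports assumed: java.util.List and
org.apache.hadoop.ha.HAServiceProtocol.HAServiceState):

// Illustrative helper (not patch code): prefer a proxy whose last known
// state is OBSERVER for read-only calls; the cache may be stale, so callers
// must still be prepared to fail over.
private <T> NNProxyInfo<T> chooseReadProxy(List<NNProxyInfo<T>> proxies) {
  for (NNProxyInfo<T> p : proxies) {
    if (p.getCachedState() == HAServiceState.OBSERVER) {
      return p;
    }
  }
  return proxies.get(0); // no known observer: fall back (typically the active)
}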

http://git-wip-us.apache.org/repos/asf/hadoop/blob/039c158d/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
index dcae2db..e819282 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
@@ -20,18 +20,24 @@ package org.apache.hadoop.hdfs.server.namenode.ha;
 import java.io.Closeable;
 import java.io.IOException;
 import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.InvocationTargetException;
 import java.lang.reflect.Method;
 import java.lang.reflect.Proxy;
 import java.net.URI;
-import java.util.ArrayList;
 import java.util.List;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
 import org.apache.hadoop.hdfs.ClientGSIContext;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.io.retry.AtMostOnce;
+import org.apache.hadoop.io.retry.Idempotent;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import 

[22/47] hadoop git commit: YARN-8488. Added SUCCEEDED/FAILED states to YARN service. Contributed by Suma Shivaprasad.

2018-08-31 Thread xkrogen
YARN-8488. Added SUCCEEDED/FAILED states to YARN service.
Contributed by Suma Shivaprasad.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fd089caf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fd089caf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fd089caf

Branch: refs/heads/HDFS-12943
Commit: fd089caf69cf608a91564c9c3d20cbf84e7fd60c
Parents: c61824a
Author: Eric Yang 
Authored: Tue Aug 28 13:55:28 2018 -0400
Committer: Eric Yang 
Committed: Tue Aug 28 13:55:28 2018 -0400

--
 .../hadoop/yarn/service/ServiceScheduler.java   | 100 ++---
 .../service/api/records/ComponentState.java |   2 +-
 .../service/api/records/ContainerState.java |   3 +-
 .../yarn/service/api/records/ServiceState.java  |   2 +-
 .../component/instance/ComponentInstance.java   | 144 ++-
 .../timelineservice/ServiceTimelineEvent.java   |   5 +-
 .../ServiceTimelinePublisher.java   |  33 -
 .../yarn/service/MockRunningServiceContext.java |  18 ++-
 .../hadoop/yarn/service/ServiceTestUtils.java   |   9 +-
 .../yarn/service/component/TestComponent.java   |  55 ++-
 .../component/TestComponentRestartPolicy.java   |   1 -
 .../instance/TestComponentInstance.java |  35 ++---
 .../TestServiceTimelinePublisher.java   |   4 +-
 13 files changed, 322 insertions(+), 89 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fd089caf/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
index 384659f..b49ef2a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
@@ -59,6 +59,7 @@ import org.apache.hadoop.yarn.event.EventHandler;
 import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.service.api.ServiceApiConstants;
+import org.apache.hadoop.yarn.service.api.records.ContainerState;
 import org.apache.hadoop.yarn.service.api.records.Service;
 import org.apache.hadoop.yarn.service.api.records.ServiceState;
 import org.apache.hadoop.yarn.service.api.records.ConfigFile;
@@ -80,6 +81,8 @@ import org.apache.hadoop.yarn.service.utils.ServiceApiUtil;
 import org.apache.hadoop.yarn.service.utils.ServiceRegistryUtils;
 import org.apache.hadoop.yarn.service.utils.ServiceUtils;
 import org.apache.hadoop.yarn.util.BoundedAppender;
+import org.apache.hadoop.yarn.util.Clock;
+import org.apache.hadoop.yarn.util.SystemClock;
 import org.apache.hadoop.yarn.util.resource.ResourceUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -102,7 +105,8 @@ import java.util.concurrent.TimeUnit;
 
 import static org.apache.hadoop.fs.FileSystem.FS_DEFAULT_NAME_KEY;
 import static org.apache.hadoop.registry.client.api.RegistryConstants.*;
-import static 
org.apache.hadoop.yarn.api.records.ContainerExitStatus.KILLED_AFTER_APP_COMPLETION;
+import static org.apache.hadoop.yarn.api.records.ContainerExitStatus
+.KILLED_AFTER_APP_COMPLETION;
 import static org.apache.hadoop.yarn.service.api.ServiceApiConstants.*;
 import static org.apache.hadoop.yarn.service.component.ComponentEventType.*;
 import static org.apache.hadoop.yarn.service.exceptions.LauncherExitCodes
@@ -137,6 +141,8 @@ public class ServiceScheduler extends CompositeService {
 
   private ServiceTimelinePublisher serviceTimelinePublisher;
 
+  private boolean timelineServiceEnabled;
+
  // Global diagnostics that will be reported to RM on exit.
  // The unit is the number of characters. This will be limited to 64 * 1024
  // characters.
@@ -169,6 +175,8 @@ public class ServiceScheduler extends CompositeService {
   private volatile FinalApplicationStatus finalApplicationStatus =
   FinalApplicationStatus.ENDED;
 
+  private Clock systemClock;
+
   // For unit test override since we don't want to terminate UT process.
   private ServiceUtils.ProcessTerminationHandler
   terminationHandler = new ServiceUtils.ProcessTerminationHandler();
@@ 
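
The Clock/SystemClock imports above are easiest to read as a usage sketch;
this is an assumption about intent, not quoted patch code (both classes live
in org.apache.hadoop.yarn.util):

// Assumed usage of the injected clock; a mock Clock can replace it in tests.
private final Clock systemClock = SystemClock.getInstance();

long stamp() {
  return systemClock.getTime(); // wall-clock millis for SUCCEEDED/FAILED events
}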

[38/47] hadoop git commit: HDFS-13863. FsDatasetImpl should log DiskOutOfSpaceException. Contributed by Fei Hui.

2018-08-31 Thread xkrogen
HDFS-13863. FsDatasetImpl should log DiskOutOfSpaceException. Contributed by 
Fei Hui.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/582cb10e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/582cb10e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/582cb10e

Branch: refs/heads/HDFS-12943
Commit: 582cb10ec74ed5666946a3769002ceb80ba660cb
Parents: d53a10b
Author: Yiqun Lin 
Authored: Thu Aug 30 11:21:13 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Aug 30 11:21:13 2018 +0800

--
 .../hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/582cb10e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index d7f133e..27196c2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -1397,6 +1397,9 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
   datanode.getMetrics().incrRamDiskBlocksWrite();
 } catch (DiskOutOfSpaceException de) {
   // Ignore the exception since we just fall back to persistent 
storage.
+  LOG.warn("Insufficient space for placing the block on a transient "
+  + "volume, fall back to persistent storage: "
+  + de.getMessage());
 } finally {
   if (ref == null) {
 cacheManager.release(b.getNumBytes());





[19/47] hadoop git commit: HDDS-332. Remove the ability to configure ozone.handler.type. Contributed by Nandakumar and Anu Engineer.

2018-08-31 Thread xkrogen
http://git-wip-us.apache.org/repos/asf/hadoop/blob/df21e1b1/hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/hdfs/server/datanode/ObjectStoreHandler.java
--
diff --git 
a/hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/hdfs/server/datanode/ObjectStoreHandler.java
 
b/hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/hdfs/server/datanode/ObjectStoreHandler.java
index 2200cd8..f56cbe8 100644
--- 
a/hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/hdfs/server/datanode/ObjectStoreHandler.java
+++ 
b/hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/hdfs/server/datanode/ObjectStoreHandler.java
@@ -1,64 +1,58 @@
 /**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
  * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
  */
 package org.apache.hadoop.hdfs.server.datanode;
 
-import static org.apache.hadoop.hdds.HddsUtils.getScmAddressForBlockClients;
-import static org.apache.hadoop.hdds.HddsUtils.getScmAddressForClients;
-import static org.apache.hadoop.ozone.OmUtils.getOmAddress;
-import static org.apache.hadoop.ozone.OzoneConfigKeys.*;
-import static 
com.sun.jersey.api.core.ResourceConfig.PROPERTY_CONTAINER_REQUEST_FILTERS;
-import static com.sun.jersey.api.core.ResourceConfig.FEATURE_TRACE;
-
-import java.io.Closeable;
-import java.io.IOException;
-import java.net.InetSocketAddress;
-import java.util.HashMap;
-import java.util.Map;
-
 import com.sun.jersey.api.container.ContainerFactory;
 import com.sun.jersey.api.core.ApplicationAdapter;
-
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB;
+import org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolPB;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolPB;
 import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.ipc.Client;
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.net.NetUtils;
 import 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB;
 import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolPB;
-import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.web.ObjectStoreApplication;
 import org.apache.hadoop.ozone.web.handlers.ServiceFilter;
+import org.apache.hadoop.ozone.web.interfaces.StorageHandler;
 import org.apache.hadoop.ozone.web.netty.ObjectStoreJerseyContainer;
-import org.apache.hadoop.hdds.scm.protocolPB
-.ScmBlockLocationProtocolClientSideTranslatorPB;
-import org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolPB;
+import org.apache.hadoop.ozone.web.storage.DistributedStorageHandler;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.ipc.Client;
-import org.apache.hadoop.ipc.ProtobufRpcEngine;
-import org.apache.hadoop.ipc.RPC;
-import org.apache.hadoop.net.NetUtils;
-import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.hdds.scm.protocolPB
-.StorageContainerLocationProtocolClientSideTranslatorPB;
-import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolPB;
-import 

[37/47] hadoop git commit: HADOOP-15705. Typo in the definition of "stable" in the interface classification

2018-08-31 Thread xkrogen
HADOOP-15705. Typo in the definition of "stable" in the interface classification

Change-Id: I3eae2143400a534903db4f186400561fc8d2bd56


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d53a10b0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d53a10b0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d53a10b0

Branch: refs/heads/HDFS-12943
Commit: d53a10b0a552155de700e396fd7f450a4c5f9c22
Parents: 692736f
Author: Daniel Templeton 
Authored: Wed Aug 29 13:59:32 2018 -0700
Committer: Daniel Templeton 
Committed: Wed Aug 29 13:59:32 2018 -0700

--
 .../hadoop-common/src/site/markdown/InterfaceClassification.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d53a10b0/hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
index a21e28b..7348044 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
@@ -124,7 +124,7 @@ hence serves as a safe development target. A Stable 
interface may evolve
 compatibly between minor releases.
 
 Incompatible changes allowed: major (X.0.0)
-Compatible changes allowed: maintenance (x.Y.0)
+Compatible changes allowed: maintenance (x.y.Z)
 
  Evolving
 





[09/47] hadoop git commit: YARN-8719. Typo correction for yarn configuration in OpportunisticContainers(federation) docs. Contributed by Y. SREENIVASULU REDDY.

2018-08-31 Thread xkrogen
YARN-8719. Typo correction for yarn configuration in 
OpportunisticContainers(federation) docs. Contributed by Y. SREENIVASULU REDDY.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e8b063f6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e8b063f6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e8b063f6

Branch: refs/heads/HDFS-12943
Commit: e8b063f63049d781f4bd67e2ac928c03fd7b7941
Parents: f9c6fd9
Author: Weiwei Yang 
Authored: Tue Aug 28 01:02:51 2018 +0800
Committer: Weiwei Yang 
Committed: Tue Aug 28 01:03:03 2018 +0800

--
 .../src/site/markdown/OpportunisticContainers.md.vm| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e8b063f6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/OpportunisticContainers.md.vm
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/OpportunisticContainers.md.vm
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/OpportunisticContainers.md.vm
index f1c75ae..272c932 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/OpportunisticContainers.md.vm
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/OpportunisticContainers.md.vm
@@ -60,7 +60,7 @@ In order to submit jobs to a cluster that has AMRMProxy 
turned on, one must crea
 
 | Property | Value | Description |
 |: |:- |:- |
-| `yarn.resourcemanger.scheduler.address` | `localhost:8049` | Redirects jobs 
to the Node Manager's AMRMProxy port.|
+| `yarn.resourcemanager.scheduler.address` | `localhost:8049` | Redirects jobs 
to the Node Manager's AMRMProxy port.|
 
 
 $H3 Running a Sample Job
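
Equivalently in client code; everything except the property name and port
(both taken from the table above) is illustrative:

// Illustrative: route AM-RM traffic through the local AMRMProxy.
// Uses org.apache.hadoop.yarn.conf.YarnConfiguration.
YarnConfiguration conf = new YarnConfiguration();
conf.set("yarn.resourcemanager.scheduler.address", "localhost:8049");
// submit the sample job with this conf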





[18/47] hadoop git commit: HDDS-381. Fix TestKeys#testPutAndGetKeyWithDnRestart. Contributed by Mukul Kumar Singh.

2018-08-31 Thread xkrogen
HDDS-381. Fix TestKeys#testPutAndGetKeyWithDnRestart. Contributed by Mukul 
Kumar Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2172399c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2172399c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2172399c

Branch: refs/heads/HDFS-12943
Commit: 2172399c55b481ea0da8cf2e2cb91ea6d8140b27
Parents: 75691ad
Author: Nanda kumar 
Authored: Tue Aug 28 22:19:52 2018 +0530
Committer: Nanda kumar 
Committed: Tue Aug 28 22:19:52 2018 +0530

--
 .../common/transport/server/GrpcXceiverService.java|  8 +++-
 .../java/org/apache/hadoop/ozone/MiniOzoneCluster.java |  3 ++-
 .../org/apache/hadoop/ozone/MiniOzoneClusterImpl.java  | 13 +++--
 .../statemachine/commandhandler/TestBlockDeletion.java |  9 +++--
 .../org/apache/hadoop/ozone/web/client/TestKeys.java   | 11 ---
 5 files changed, 27 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2172399c/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/GrpcXceiverService.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/GrpcXceiverService.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/GrpcXceiverService.java
index df6220c..db4a86a 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/GrpcXceiverService.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/GrpcXceiverService.java
@@ -56,10 +56,8 @@ public class GrpcXceiverService extends
   ContainerCommandResponseProto resp = dispatcher.dispatch(request);
   responseObserver.onNext(resp);
 } catch (Throwable e) {
-  if (LOG.isDebugEnabled()) {
-LOG.debug("{} got exception when processing"
+  LOG.error("{} got exception when processing"
 + " ContainerCommandRequestProto {}: {}", request, e);
-  }
   responseObserver.onError(e);
 }
   }
@@ -67,13 +65,13 @@ public class GrpcXceiverService extends
   @Override
   public void onError(Throwable t) {
 // for now we just log a msg
-LOG.info("{}: ContainerCommand send on error. Exception: {}", t);
+LOG.error("{}: ContainerCommand send on error. Exception: {}", t);
   }
 
   @Override
   public void onCompleted() {
 if (isClosed.compareAndSet(false, true)) {
-  LOG.info("{}: ContainerCommand send completed");
+  LOG.debug("{}: ContainerCommand send completed");
   responseObserver.onCompleted();
 }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2172399c/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java
--
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java
index b568672..ae6a91e 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java
@@ -152,7 +152,8 @@ public interface MiniOzoneCluster {
*
* @param i index of HddsDatanode in the MiniOzoneCluster
*/
-  void restartHddsDatanode(int i);
+  void restartHddsDatanode(int i) throws InterruptedException,
+  TimeoutException;
 
   /**
* Shutdown a particular HddsDatanode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2172399c/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
--
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
index 9b7e399..e06e2f6 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
@@ -216,7 +216,8 @@ public final class MiniOzoneClusterImpl implements 
MiniOzoneCluster {
   }
 
   @Override
-  public void restartHddsDatanode(int i) {
+  public void restartHddsDatanode(int i) throws InterruptedException,
+  TimeoutException {
 HddsDatanodeService datanodeService = 

[08/47] hadoop git commit: HADOOP-15633. fs.TrashPolicyDefault: Can't create trash directory. Contributed by Fei Hui.

2018-08-31 Thread xkrogen
HADOOP-15633. fs.TrashPolicyDefault: Can't create trash directory. Contributed 
by Fei Hui.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f9c6fd94
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f9c6fd94
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f9c6fd94

Branch: refs/heads/HDFS-12943
Commit: f9c6fd94711458b77ecf3fa425aad7fda5089376
Parents: 6eecd25
Author: John Zhuge 
Authored: Mon Aug 27 09:22:59 2018 -0700
Committer: John Zhuge 
Committed: Mon Aug 27 09:22:59 2018 -0700

--
 .../apache/hadoop/fs/TrashPolicyDefault.java| 14 +
 .../java/org/apache/hadoop/fs/TestTrash.java| 54 
 2 files changed, 68 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f9c6fd94/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
index 265e967..9c6a685 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
@@ -148,6 +148,20 @@ public class TrashPolicyDefault extends TrashPolicy {
   LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
   return false;
 }
+  } catch (FileAlreadyExistsException e) {
+// find the path which is not a directory, and modify baseTrashPath
+// & trashPath, then mkdirs
+Path existsFilePath = baseTrashPath;
+while (!fs.exists(existsFilePath)) {
+  existsFilePath = existsFilePath.getParent();
+}
+baseTrashPath = new Path(baseTrashPath.toString().replace(
+existsFilePath.toString(), existsFilePath.toString() + Time.now())
+);
+trashPath = new Path(baseTrashPath, trashPath.getName());
+// retry, ignore current failure
+--i;
+continue;
   } catch (IOException e) {
 LOG.warn("Can't create trash directory: " + baseTrashPath, e);
 cause = e;
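
Concretely, the case this handles: a plain file already occupies a path where
the trash needs a directory. The renaming step, shown on made-up paths
(Time.now() is org.apache.hadoop.util.Time, the same helper the patch calls):

// Made-up paths for illustration; only the replace/Time.now() trick is real.
Path baseTrashPath = new Path("/user/a/.Trash/Current/d/sub"); // must become a dir
Path existsFilePath = new Path("/user/a/.Trash/Current/d");    // but this is a file
baseTrashPath = new Path(baseTrashPath.toString().replace(
    existsFilePath.toString(), existsFilePath.toString() + Time.now()));
// -> /user/a/.Trash/Current/d1535760000000/sub; mkdirs is then retried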

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f9c6fd94/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
index fa2d21f..568821b 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
@@ -518,6 +518,60 @@ public class TestTrash {
   }
 
   @Test
+  public void testExistingFileTrash() throws IOException {
+Configuration conf = new Configuration();
+conf.setClass("fs.file.impl", TestLFS.class, FileSystem.class);
+FileSystem fs = FileSystem.getLocal(conf);
+conf.set("fs.defaultFS", fs.getUri().toString());
+conf.setLong(FS_TRASH_INTERVAL_KEY, 0); // disabled
+assertFalse(new Trash(conf).isEnabled());
+
+conf.setLong(FS_TRASH_INTERVAL_KEY, 10); // 10 minute
+assertTrue(new Trash(conf).isEnabled());
+
+FsShell shell = new FsShell();
+shell.setConf(conf);
+
+// First create a new directory with mkdirs
+Path myPath = new Path(TEST_DIR, "test/mkdirs");
+mkdir(fs, myPath);
+
+// Second, create a file in that directory.
+Path myFile = new Path(TEST_DIR, "test/mkdirs/myExistingFile");
+writeFile(fs, myFile, 10);
+// First rm a file
+mkdir(fs, myPath);
+writeFile(fs, myFile, 10);
+
+String[] args1 = new String[2];
+args1[0] = "-rm";
+args1[1] = myFile.toString();
+int val1 = -1;
+try {
+  val1 = shell.run(args1);
+} catch (Exception e) {
+  System.err.println("Exception raised from Trash.run " +
+  e.getLocalizedMessage());
+}
+assertTrue(val1 == 0);
+
+// Second, rm a file whose parent path is the same as above
+mkdir(fs, myFile);
+writeFile(fs, new Path(myFile, "mySubFile"), 10);
+String[] args2 = new String[2];
+args2[0] = "-rm";
+args2[1] = new Path(myFile, "mySubFile").toString();
+int val2 = -1;
+try {
+  val2 = shell.run(args2);
+} catch (Exception e) {
+  System.err.println("Exception raised from Trash.run " +
+  e.getLocalizedMessage());
+}
+assertTrue(val2 == 0);
+  }
+
+  @Test
   public void testNonDefaultFS() throws IOException {
 

[03/47] hadoop git commit: HDDS-374. Support configuring container size in units smaller than GB. Contributed by Nanda kumar.

2018-08-31 Thread xkrogen
HDDS-374. Support configuring container size in units smaller than GB. 
Contributed by Nanda kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/12b2f362
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/12b2f362
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/12b2f362

Branch: refs/heads/HDFS-12943
Commit: 12b2f362cc9a370904da724a68ba015cd3a99eff
Parents: 91836f0
Author: Nanda kumar 
Authored: Mon Aug 27 18:29:32 2018 +0530
Committer: Nanda kumar 
Committed: Mon Aug 27 18:29:32 2018 +0530

--
 .../org/apache/hadoop/ozone/OzoneConsts.java|  2 +-
 .../container/common/impl/ContainerData.java| 24 ++--
 .../common/impl/ContainerDataYaml.java  |  5 ++--
 .../container/common/impl/HddsDispatcher.java   |  6 +
 .../container/keyvalue/KeyValueContainer.java   |  2 +-
 .../keyvalue/KeyValueContainerData.java | 10 
 .../container/keyvalue/KeyValueHandler.java | 15 ++--
 .../common/TestKeyValueContainerData.java   |  5 ++--
 .../common/impl/TestContainerDataYaml.java  |  7 +++---
 .../container/common/impl/TestContainerSet.java |  7 --
 .../common/impl/TestHddsDispatcher.java |  3 ++-
 .../keyvalue/TestChunkManagerImpl.java  |  4 +++-
 .../container/keyvalue/TestKeyManagerImpl.java  |  4 +++-
 .../keyvalue/TestKeyValueBlockIterator.java |  4 +++-
 .../keyvalue/TestKeyValueContainer.java | 13 +++
 .../container/keyvalue/TestKeyValueHandler.java |  4 +++-
 .../container/ozoneimpl/TestOzoneContainer.java |  5 ++--
 .../test/resources/additionalfields.container   |  4 ++--
 .../test/resources/incorrect.checksum.container |  2 +-
 .../src/test/resources/incorrect.container  |  2 +-
 .../ozone/container/ContainerTestHelper.java|  4 +++-
 .../common/TestBlockDeletingService.java|  2 +-
 .../TestContainerDeletionChoosingPolicy.java|  8 +++
 .../common/impl/TestContainerPersistence.java   |  3 +--
 24 files changed, 79 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/12b2f362/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
index f912f02..320a3ed 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
@@ -185,7 +185,7 @@ public final class OzoneConsts {
   public static final String CONTAINER_TYPE = "containerType";
   public static final String STATE = "state";
   public static final String METADATA = "metadata";
-  public static final String MAX_SIZE_GB = "maxSizeGB";
+  public static final String MAX_SIZE = "maxSize";
   public static final String METADATA_PATH = "metadataPath";
   public static final String CHUNKS_PATH = "chunksPath";
   public static final String CONTAINER_DB_TYPE = "containerDBType";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/12b2f362/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
index afd1407..efea20b 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
@@ -40,7 +40,7 @@ import static org.apache.hadoop.ozone.OzoneConsts.CHECKSUM;
 import static org.apache.hadoop.ozone.OzoneConsts.CONTAINER_ID;
 import static org.apache.hadoop.ozone.OzoneConsts.CONTAINER_TYPE;
 import static org.apache.hadoop.ozone.OzoneConsts.LAYOUTVERSION;
-import static org.apache.hadoop.ozone.OzoneConsts.MAX_SIZE_GB;
+import static org.apache.hadoop.ozone.OzoneConsts.MAX_SIZE;
 import static org.apache.hadoop.ozone.OzoneConsts.METADATA;
 import static org.apache.hadoop.ozone.OzoneConsts.STATE;
 
@@ -67,7 +67,7 @@ public abstract class ContainerData {
   // State of the Container
   private ContainerLifeCycleState state;
 
-  private final int maxSizeGB;
+  private final long maxSize;
 
   /** parameters for read/write statistics on the container. **/
   private final AtomicLong readBytes;
@@ -92,16 +92,16 @@ public abstract class ContainerData {
   LAYOUTVERSION,
   STATE,
   METADATA,
-  
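
With maxSize now a long, the size can be read once with a unit-aware getter
and carried around in bytes. A hedged sketch; the key name and default below
are assumptions (the real ones live in ScmConfigKeys, not shown in this mail),
while getStorageSize and StorageUnit are Hadoop's
org.apache.hadoop.conf.Configuration#getStorageSize and
org.apache.hadoop.conf.StorageUnit:

// Hedged sketch: parse "5GB", "512MB", etc. into bytes at the config edge.
OzoneConfiguration conf = new OzoneConfiguration();
long maxSizeBytes = (long) conf.getStorageSize(
    "ozone.scm.container.size", "5GB", StorageUnit.BYTES); // key/default assumed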

[32/47] hadoop git commit: YARN-8723. Fix a typo in CS init error message when resource calculator is not correctly set. Contributed by Abhishek Modi.

2018-08-31 Thread xkrogen
YARN-8723. Fix a typo in CS init error message when resource calculator is not 
correctly set. Contributed by Abhishek Modi.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3fa46394
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3fa46394
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3fa46394

Branch: refs/heads/HDFS-12943
Commit: 3fa46394214181ed1cc7f06b886282bbdf67a10f
Parents: 64ad029
Author: Weiwei Yang 
Authored: Wed Aug 29 10:46:13 2018 +0800
Committer: Weiwei Yang 
Committed: Wed Aug 29 11:13:44 2018 +0800

--
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3fa46394/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index dec1301..81dcf86 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -348,7 +348,7 @@ public class CapacityScheduler extends
 throw new YarnRuntimeException("RM uses DefaultResourceCalculator 
which"
 + " used only memory as resource-type but invalid resource-types"
 + " specified " + ResourceUtils.getResourceTypes() + ". Use"
-+ " DomainantResourceCalculator instead to make effective use of"
++ " DominantResourceCalculator instead to make effective use of"
 + " these resource-types");
   }
   this.usePortForNodeName = this.conf.getUsePortForNodeName();
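
The patch only corrects the message text; the remedy the message points at is
configuring the dominant calculator. A sketch from memory (verify the key
against CapacitySchedulerConfiguration before relying on it):

// Assumed configuration key, shown for orientation only:
conf.set("yarn.scheduler.capacity.resource-calculator",
    "org.apache.hadoop.yarn.util.resource.DominantResourceCalculator");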





[44/47] hadoop git commit: HDFS-13027. Handle possible NPEs due to deleted blocks in race condition. Contributed by Vinayakumar B.

2018-08-31 Thread xkrogen
HDFS-13027. Handle possible NPEs due to deleted blocks in race condition. 
Contributed by Vinayakumar B.

(cherry picked from commit 65977e5d8124be2bc208af25beed934933f170b3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c36d69a7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c36d69a7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c36d69a7

Branch: refs/heads/HDFS-12943
Commit: c36d69a7b30927eaea16335e06cfcc247accde35
Parents: f2c2a68
Author: Vinayakumar B 
Authored: Wed Aug 29 22:40:13 2018 +0530
Committer: Vinayakumar B 
Committed: Thu Aug 30 22:15:51 2018 +0530

--
 .../apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java| 2 +-
 .../apache/hadoop/hdfs/server/blockmanagement/BlockManager.java | 4 
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java| 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java| 5 -
 4 files changed, 10 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c36d69a7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index 43f4f47..d160f61 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -52,7 +52,7 @@ public abstract class BlockInfo extends Block
   /**
* Block collection ID.
*/
-  private long bcId;
+  private volatile long bcId;
 
   /** For implementing {@link LightWeightGSet.LinkedElement} interface. */
   private LightWeightGSet.LinkedElement nextLinkedElement;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c36d69a7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 17f6f6e..675221a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -4171,6 +4171,10 @@ public class BlockManager implements BlockStatsMXBean {
 int numExtraRedundancy = 0;
 while(it.hasNext()) {
   final BlockInfo block = it.next();
+  if (block.isDeleted()) {
+//Orphan block, will be handled eventually, skip
+continue;
+  }
   int expectedReplication = this.getExpectedRedundancyNum(block);
   NumberReplicas num = countNodes(block);
   if (shouldProcessExtraRedundancy(num, expectedReplication)) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c36d69a7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 6ba0e0b..74c9f10 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -4128,7 +4128,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 while (it.hasNext()) {
   Block b = it.next();
   BlockInfo blockInfo = blockManager.getStoredBlock(b);
-  if (blockInfo == null) {
+  if (blockInfo == null || blockInfo.isDeleted()) {
 LOG.info("Cannot find block info for block " + b);
   } else {
 BlockCollection bc = getBlockCollection(blockInfo);
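
The same defensive pattern is applied at each call site; paraphrased, inside
the existing iteration loop:

// Paraphrased pattern: a block can be deleted between lookup and use,
// so every consumer re-checks isDeleted() before dereferencing.
BlockInfo blockInfo = blockManager.getStoredBlock(b);
if (blockInfo == null || blockInfo.isDeleted()) {
  continue; // concurrently deleted; skip instead of hitting an NPE
}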

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c36d69a7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
 

[39/47] hadoop git commit: HADOOP-15698. KMS log4j is not initialized properly at startup. Contributed by Kitti Nanasi.

2018-08-31 Thread xkrogen
HADOOP-15698. KMS log4j is not initialized properly at startup. Contributed by 
Kitti Nanasi.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/781437c2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/781437c2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/781437c2

Branch: refs/heads/HDFS-12943
Commit: 781437c219dc3422797a32dc7ba72cd4f5ee38e2
Parents: 582cb10
Author: Kitti Nanasi 
Authored: Wed Aug 29 22:06:36 2018 -0700
Committer: Xiao Chen 
Committed: Wed Aug 29 22:07:49 2018 -0700

--
 .../crypto/key/kms/server/KMSConfiguration.java | 31 
 .../hadoop/crypto/key/kms/server/KMSWebApp.java | 38 +---
 .../crypto/key/kms/server/KMSWebServer.java |  1 +
 3 files changed, 33 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/781437c2/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSConfiguration.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSConfiguration.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSConfiguration.java
index 18eec19..35ffb42 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSConfiguration.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSConfiguration.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.crypto.key.kms.server;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
+import org.apache.log4j.PropertyConfigurator;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -103,6 +104,8 @@ public class KMSConfiguration {
 
   public static final boolean KEY_AUTHORIZATION_ENABLE_DEFAULT = true;
 
+  private static final String LOG4J_PROPERTIES = "kms-log4j.properties";
+
   static {
 Configuration.addDefaultResource(KMS_DEFAULT_XML);
 Configuration.addDefaultResource(KMS_SITE_XML);
@@ -159,4 +162,32 @@ public class KMSConfiguration {
 }
 return newer;
   }
+
+  public static void initLogging() {
+String confDir = System.getProperty(KMS_CONFIG_DIR);
+if (confDir == null) {
+  throw new RuntimeException("System property '" +
+  KMSConfiguration.KMS_CONFIG_DIR + "' not defined");
+}
+if (System.getProperty("log4j.configuration") == null) {
+  System.setProperty("log4j.defaultInitOverride", "true");
+  boolean fromClasspath = true;
+  File log4jConf = new File(confDir, LOG4J_PROPERTIES).getAbsoluteFile();
+  if (log4jConf.exists()) {
+PropertyConfigurator.configureAndWatch(log4jConf.getPath(), 1000);
+fromClasspath = false;
+  } else {
+ClassLoader cl = Thread.currentThread().getContextClassLoader();
+URL log4jUrl = cl.getResource(LOG4J_PROPERTIES);
+if (log4jUrl != null) {
+  PropertyConfigurator.configure(log4jUrl);
+}
+  }
+  LOG.debug("KMS log starting");
+  if (fromClasspath) {
+LOG.warn("Log4j configuration file '{}' not found", LOG4J_PROPERTIES);
+LOG.warn("Logging with INFO level to standard output");
+  }
+}
+  }
 }
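
Per the diffstat, KMSWebServer.java changes by one line, presumably invoking
this hook before anything logs. The shape below is assumed; the real main()
is not shown in this mail:

// Assumed call order, not quoted code:
public static void main(String[] args) throws Exception {
  KMSConfiguration.initLogging(); // configure log4j before the first logger fires
  // ... build and start the KMS web server as before
}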

http://git-wip-us.apache.org/repos/asf/hadoop/blob/781437c2/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
index cb4bf7e..0640e25 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
@@ -17,10 +17,8 @@
  */
 package org.apache.hadoop.crypto.key.kms.server;
 
-import java.io.File;
 import java.io.IOException;
 import java.net.URI;
-import java.net.URL;
 
 import javax.servlet.ServletContextEvent;
 import javax.servlet.ServletContextListener;
@@ -37,14 +35,13 @@ import 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
 import org.apache.hadoop.crypto.key.KeyProviderFactory;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.VersionInfo;
-import org.apache.log4j.PropertyConfigurator;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 @InterfaceAudience.Private
 public class KMSWebApp implements ServletContextListener {
 
-  private static final 

[29/47] hadoop git commit: YARN-8697. LocalityMulticastAMRMProxyPolicy should fall back to a random sub-cluster when it cannot resolve a resource. Contributed by Botong Huang.

2018-08-31 Thread xkrogen
YARN-8697. LocalityMulticastAMRMProxyPolicy should fall back to a random 
sub-cluster when it cannot resolve a resource. Contributed by Botong Huang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7ed458b2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7ed458b2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7ed458b2

Branch: refs/heads/HDFS-12943
Commit: 7ed458b255e492fd5bc2ca36f216ff1b16054db7
Parents: 3e18b95
Author: Giovanni Matteo Fumarola 
Authored: Tue Aug 28 16:01:35 2018 -0700
Committer: Giovanni Matteo Fumarola 
Committed: Tue Aug 28 16:01:35 2018 -0700

--
 .../LocalityMulticastAMRMProxyPolicy.java   | 105 +++
 .../TestLocalityMulticastAMRMProxyPolicy.java   |  53 --
 2 files changed, 125 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7ed458b2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/amrmproxy/LocalityMulticastAMRMProxyPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/amrmproxy/LocalityMulticastAMRMProxyPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/amrmproxy/LocalityMulticastAMRMProxyPolicy.java
index 1ccd61c..e5f26d8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/amrmproxy/LocalityMulticastAMRMProxyPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/amrmproxy/LocalityMulticastAMRMProxyPolicy.java
@@ -21,8 +21,11 @@ package 
org.apache.hadoop.yarn.server.federation.policies.amrmproxy;
 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.HashSet;
+import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Random;
 import java.util.Set;
 import java.util.TreeMap;
 import java.util.concurrent.ConcurrentHashMap;
@@ -123,6 +126,8 @@ public class LocalityMulticastAMRMProxyPolicy extends 
AbstractAMRMProxyPolicy {
   public static final Logger LOG =
   LoggerFactory.getLogger(LocalityMulticastAMRMProxyPolicy.class);
 
+  private static Random rand = new Random();
+
   private Map weights;
   private SubClusterResolver resolver;
 
@@ -275,26 +280,18 @@ public class LocalityMulticastAMRMProxyPolicy extends 
AbstractAMRMProxyPolicy {
   }
 
   // Handle node/rack requests that the SubClusterResolver cannot map to
-  // any cluster. Defaulting to home subcluster.
+  // any cluster. Pick a random sub-cluster from active and enabled ones.
+  targetId = getSubClusterForUnResolvedRequest(bookkeeper,
+  rr.getAllocationRequestId());
   if (LOG.isDebugEnabled()) {
 LOG.debug("ERROR resolving sub-cluster for resourceName: "
-+ rr.getResourceName() + " we are falling back to homeSubCluster:"
-+ homeSubcluster);
++ rr.getResourceName() + ", picked a random subcluster to forward:"
++ targetId);
   }
-
-  // If home-subcluster is not active, ignore node/rack request
-  if (bookkeeper.isActiveAndEnabled(homeSubcluster)) {
-if (targetIds != null && targetIds.size() > 0) {
-  bookkeeper.addRackRR(homeSubcluster, rr);
-} else {
-  bookkeeper.addLocalizedNodeRR(homeSubcluster, rr);
-}
+  if (targetIds != null && targetIds.size() > 0) {
+bookkeeper.addRackRR(targetId, rr);
   } else {
-if (LOG.isDebugEnabled()) {
-  LOG.debug("The homeSubCluster (" + homeSubcluster + ") we are "
-  + "defaulting to is not active, the ResourceRequest "
-  + "will be ignored.");
-}
+bookkeeper.addLocalizedNodeRR(targetId, rr);
   }
 }
 
@@ -314,6 +311,14 @@ public class LocalityMulticastAMRMProxyPolicy extends 
AbstractAMRMProxyPolicy {
   }
 
   /**
+   * For unit test to override.
+   */
+  protected SubClusterId getSubClusterForUnResolvedRequest(
+  AllocationBookkeeper bookKeeper, long allocationId) {
+return bookKeeper.getSubClusterForUnResolvedRequest(allocationId);
+  }
+
+  /**
* It splits a list of non-localized resource requests among sub-clusters.
*/
   private void splitAnyRequests(List originalResourceRequests,
@@ -512,10 +517,11 @@ public class LocalityMulticastAMRMProxyPolicy extends 
AbstractAMRMProxyPolicy {
* This 
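
What getSubClusterForUnResolvedRequest plausibly reduces to — a uniform pick
among the active-and-enabled sub-clusters. A paraphrase; the bookkeeper
internals are not shown in this mail, so names here are assumptions:

// Paraphrased sketch; uses java.util.Random like the field added above.
private static final Random RAND = new Random();

static SubClusterId pickRandomActive(List<SubClusterId> activeAndEnabled) {
  return activeAndEnabled.get(RAND.nextInt(activeAndEnabled.size()));
}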

[04/47] hadoop git commit: HDDS-313. Add metrics to ContainerStateMachine. Contributed by chencan.

2018-08-31 Thread xkrogen
HDDS-313. Add metrics to ContainerStateMachine. Contributed by chencan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/744ce200
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/744ce200
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/744ce200

Branch: refs/heads/HDFS-12943
Commit: 744ce200d20a8f33b1dff1ad561843410c722501
Parents: 12b2f36
Author: Márton Elek 
Authored: Mon Aug 27 15:42:22 2018 +0200
Committer: Márton Elek 
Committed: Mon Aug 27 15:51:34 2018 +0200

--
 .../transport/server/ratis/CSMMetrics.java  | 115 +++
 .../server/ratis/ContainerStateMachine.java |  15 ++
 .../transport/server/ratis/TestCSMMetrics.java  | 202 +++
 3 files changed, 332 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/744ce200/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
new file mode 100644
index 000..b6aed60
--- /dev/null
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
@@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.container.common.transport.server.ratis;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Container State Machine statistics.
+ */
+@InterfaceAudience.Private
+@Metrics(about="Container State Machine Metrics", context="dfs")
+public class CSMMetrics {
+  public static final String SOURCE_NAME =
+  CSMMetrics.class.getSimpleName();
+
+  // Ratis op metrics
+  private @Metric MutableCounterLong numWriteStateMachineOps;
+  private @Metric MutableCounterLong numReadStateMachineOps;
+  private @Metric MutableCounterLong numApplyTransactionOps;
+
+  // Failure Metrics
+  private @Metric MutableCounterLong numWriteStateMachineFails;
+  private @Metric MutableCounterLong numReadStateMachineFails;
+  private @Metric MutableCounterLong numApplyTransactionFails;
+
+  public CSMMetrics() {
+  }
+
+  public static CSMMetrics create() {
+MetricsSystem ms = DefaultMetricsSystem.instance();
+return ms.register(SOURCE_NAME,
+"Container State Machine",
+new CSMMetrics());
+  }
+
+  public void incNumWriteStateMachineOps() {
+numWriteStateMachineOps.incr();
+  }
+
+  public void incNumReadStateMachineOps() {
+numReadStateMachineOps.incr();
+  }
+
+  public void incNumApplyTransactionsOps() {
+numApplyTransactionOps.incr();
+  }
+
+  public void incNumWriteStateMachineFails() {
+numWriteStateMachineFails.incr();
+  }
+
+  public void incNumReadStateMachineFails() {
+numReadStateMachineFails.incr();
+  }
+
+  public void incNumApplyTransactionsFails() {
+numApplyTransactionFails.incr();
+  }
+
+  @VisibleForTesting
+  public long getNumWriteStateMachineOps() {
+return numWriteStateMachineOps.value();
+  }
+
+  @VisibleForTesting
+  public long getNumReadStateMachineOps() {
+return numReadStateMachineOps.value();
+  }
+
+  @VisibleForTesting
+  public long getNumApplyTransactionsOps() {
+return numApplyTransactionOps.value();
+  }
+
+  @VisibleForTesting
+  public long getNumWriteStateMachineFails() {
+return numWriteStateMachineFails.value();
+  }
+
+  @VisibleForTesting
+  public long 
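
Typical wiring for a metrics source like this; a hedged sketch, since the
ContainerStateMachine side of the patch appears only in the diffstat:

// Hedged usage sketch: register once, bump a counter per op and per failure.
private final CSMMetrics metrics = CSMMetrics.create();

void recordWrite(Runnable op) {
  metrics.incNumWriteStateMachineOps();
  try {
    op.run(); // the actual Ratis write
  } catch (RuntimeException e) {
    metrics.incNumWriteStateMachineFails();
    throw e;
  }
}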

[17/47] hadoop git commit: HDFS-13858. RBF: Add check to have single valid argument to safemode command. Contributed by Ayush Saxena.

2018-08-31 Thread xkrogen
HDFS-13858. RBF: Add check to have single valid argument to safemode command. 
Contributed by Ayush Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/75691ad6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/75691ad6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/75691ad6

Branch: refs/heads/HDFS-12943
Commit: 75691ad600473d4d315434b0876d6d10d3050a6b
Parents: 3974427
Author: Vinayakumar B 
Authored: Tue Aug 28 09:21:07 2018 +0530
Committer: Vinayakumar B 
Committed: Tue Aug 28 09:21:07 2018 +0530

--
 .../hadoop/hdfs/tools/federation/RouterAdmin.java |  6 ++
 .../server/federation/router/TestRouterAdminCLI.java  | 14 ++
 2 files changed, 20 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/75691ad6/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index 91e1669..f88d0a6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -218,6 +218,10 @@ public class RouterAdmin extends Configured implements 
Tool {
   "Successfully clear quota for mount point " + argv[i]);
 }
   } else if ("-safemode".equals(cmd)) {
+if (argv.length > 2) {
+  throw new IllegalArgumentException(
+  "Too many arguments, Max=1 argument allowed only");
+}
 manageSafeMode(argv[i]);
   } else if ("-nameservice".equals(cmd)) {
 String subcmd = argv[i];
@@ -712,6 +716,8 @@ public class RouterAdmin extends Configured implements Tool 
{
 } else if (cmd.equals("get")) {
   boolean result = getSafeMode();
   System.out.println("Safe Mode: " + result);
+} else {
+  throw new IllegalArgumentException("Invalid argument: " + cmd);
 }
   }
 
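For context, a hedged sketch of the resulting command behaviour, mirroring the new test below; the Configuration-based constructor and the ToolRunner exit codes are assumptions drawn from the test, not shown in this hunk:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.tools.federation.RouterAdmin;
import org.apache.hadoop.util.ToolRunner;

public class SafemodeArgCheckDemo {
  public static void main(String[] args) throws Exception {
    RouterAdmin admin = new RouterAdmin(new Configuration());
    // Well-formed: prints "Safe Mode: true|false" and returns 0.
    ToolRunner.run(admin, new String[] {"-safemode", "get"});
    // Extra argument: now fails fast with "Too many arguments, ..." and -1.
    ToolRunner.run(admin, new String[] {"-safemode", "get", "extra"});
    // Unknown subcommand: "safemode: Invalid argument: check" and -1.
    ToolRunner.run(admin, new String[] {"-safemode", "check"});
  }
}
```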

http://git-wip-us.apache.org/repos/asf/hadoop/blob/75691ad6/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
index 2da5fb9..2682e9a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
@@ -519,6 +519,7 @@ public class TestRouterAdminCLI {
 assertTrue(routerContext.getRouter().getSafemodeService().isInSafeMode());
 
 System.setOut(new PrintStream(out));
+System.setErr(new PrintStream(err));
 assertEquals(0, ToolRunner.run(admin,
 new String[] {"-safemode", "get"}));
 assertTrue(out.toString().contains("true"));
@@ -534,6 +535,19 @@ public class TestRouterAdminCLI {
 assertEquals(0, ToolRunner.run(admin,
 new String[] {"-safemode", "get"}));
 assertTrue(out.toString().contains("false"));
+
+out.reset();
+assertEquals(-1, ToolRunner.run(admin,
+new String[] {"-safemode", "get", "-random", "check" }));
+assertTrue(err.toString(), err.toString()
+.contains("safemode: Too many arguments, Max=1 argument allowed only"));
+err.reset();
+
+assertEquals(-1,
+ToolRunner.run(admin, new String[] {"-safemode", "check" }));
+assertTrue(err.toString(),
+err.toString().contains("safemode: Invalid argument: check"));
+err.reset();
   }
 
   @Test


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[20/47] hadoop git commit: HDDS-332. Remove the ability to configure ozone.handler.type Contributed by Nandakumar and Anu Engineer.

2018-08-31 Thread xkrogen
HDDS-332. Remove the ability to configure ozone.handler.type
Contributed by Nandakumar and Anu Engineer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/df21e1b1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/df21e1b1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/df21e1b1

Branch: refs/heads/HDFS-12943
Commit: df21e1b1ddcc8439b5fa1bb79388403f87742e65
Parents: 2172399
Author: Anu Engineer 
Authored: Tue Aug 28 09:56:02 2018 -0700
Committer: Anu Engineer 
Committed: Tue Aug 28 09:56:02 2018 -0700

--
 .../apache/hadoop/ozone/OzoneConfigKeys.java|7 -
 .../org/apache/hadoop/ozone/OzoneConsts.java|1 -
 .../common/src/main/resources/ozone-default.xml |   21 -
 .../apache/hadoop/ozone/RatisTestHelper.java|8 +-
 .../ozone/client/rest/TestOzoneRestClient.java  |7 +-
 .../rpc/TestCloseContainerHandlingByClient.java |2 -
 .../ozone/client/rpc/TestOzoneRpcClient.java|9 +-
 .../ozone/container/ContainerTestHelper.java|   10 -
 .../TestContainerDeletionChoosingPolicy.java|8 +-
 .../common/impl/TestContainerPersistence.java   |  116 +-
 .../commandhandler/TestBlockDeletion.java   |8 +-
 .../TestCloseContainerByPipeline.java   |   35 +-
 .../container/ozoneimpl/TestOzoneContainer.java |2 -
 .../ozoneimpl/TestOzoneContainerRatis.java  |2 -
 .../container/ozoneimpl/TestRatisManager.java   |2 -
 .../hadoop/ozone/freon/TestDataValidate.java|7 +-
 .../apache/hadoop/ozone/freon/TestFreon.java|3 +-
 .../ozone/om/TestContainerReportWithKeys.java   |   12 +-
 .../om/TestMultipleContainerReadWrite.java  |5 +-
 .../hadoop/ozone/om/TestOmBlockVersioning.java  |7 +-
 .../apache/hadoop/ozone/om/TestOmMetrics.java   |7 +-
 .../apache/hadoop/ozone/om/TestOmSQLCli.java|6 +-
 .../hadoop/ozone/om/TestOzoneManager.java   |5 +-
 .../hadoop/ozone/ozShell/TestOzoneShell.java|   20 +-
 .../ozone/web/TestDistributedOzoneVolumes.java  |  188 ---
 .../hadoop/ozone/web/TestLocalOzoneVolumes.java |  187 ---
 .../hadoop/ozone/web/TestOzoneVolumes.java  |  183 +++
 .../hadoop/ozone/web/TestOzoneWebAccess.java|   10 +-
 .../hadoop/ozone/web/client/TestBuckets.java|9 +-
 .../hadoop/ozone/web/client/TestKeysRatis.java  |4 +-
 .../ozone/web/client/TestOzoneClient.java   |3 -
 .../hadoop/ozone/web/client/TestVolume.java |   11 +-
 .../ozone/web/client/TestVolumeRatis.java   |3 -
 .../server/datanode/ObjectStoreHandler.java |  182 ++-
 .../web/handlers/StorageHandlerBuilder.java |   18 +-
 .../web/localstorage/LocalStorageHandler.java   |  385 --
 .../web/localstorage/OzoneMetadataManager.java  | 1138 --
 .../hadoop/fs/ozone/TestOzoneFSInputStream.java |6 +-
 38 files changed, 363 insertions(+), 2274 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/df21e1b1/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
index 92f0c41..6ad9085 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
@@ -66,16 +66,9 @@ public final class OzoneConfigKeys {
   "dfs.container.ratis.ipc.random.port";
   public static final boolean DFS_CONTAINER_RATIS_IPC_RANDOM_PORT_DEFAULT =
   false;
-
-  public static final String OZONE_LOCALSTORAGE_ROOT =
-  "ozone.localstorage.root";
-  public static final String OZONE_LOCALSTORAGE_ROOT_DEFAULT = "/tmp/ozone";
   public static final String OZONE_ENABLED =
   "ozone.enabled";
   public static final boolean OZONE_ENABLED_DEFAULT = false;
-  public static final String OZONE_HANDLER_TYPE_KEY =
-  "ozone.handler.type";
-  public static final String OZONE_HANDLER_TYPE_DEFAULT = "distributed";
   public static final String OZONE_TRACE_ENABLED_KEY =
   "ozone.trace.enabled";
   public static final boolean OZONE_TRACE_ENABLED_DEFAULT = false;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/df21e1b1/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
index 320a3ed..ab6df92 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
+++ 

[10/47] hadoop git commit: HDFS-13849. Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client. Contributed by Ian Pickering.

2018-08-31 Thread xkrogen
HDFS-13849. Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, 
hadoop-hdfs-rbf, hadoop-hdfs-native-client. Contributed by Ian Pickering.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7b1fa569
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7b1fa569
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7b1fa569

Branch: refs/heads/HDFS-12943
Commit: 7b1fa5693efc687492776d43ab482601cbb30dfd
Parents: e8b063f
Author: Giovanni Matteo Fumarola 
Authored: Mon Aug 27 10:18:05 2018 -0700
Committer: Giovanni Matteo Fumarola 
Committed: Mon Aug 27 10:18:05 2018 -0700

--
 .../src/main/native/fuse-dfs/test/TestFuseDFS.java |  6 +++---
 .../apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java |  7 ---
 .../apache/hadoop/hdfs/nfs/nfs3/AsyncDataService.java  |  6 +++---
 .../org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java   |  4 ++--
 .../apache/hadoop/hdfs/nfs/nfs3/OpenFileCtxCache.java  | 13 +++--
 .../hdfs/nfs/nfs3/PrivilegedNfsGatewayStarter.java |  7 ---
 .../java/org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java |  6 +++---
 .../org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java  |  6 +++---
 .../java/org/apache/hadoop/hdfs/nfs/TestMountd.java|  6 +++---
 .../apache/hadoop/hdfs/nfs/TestOutOfOrderWrite.java|  9 +
 .../federation/router/RouterPermissionChecker.java |  7 ---
 .../hdfs/server/federation/store/RecordStore.java  |  6 +++---
 12 files changed, 44 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b1fa569/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/TestFuseDFS.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/TestFuseDFS.java
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/TestFuseDFS.java
index a5d9abd..dabbe00 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/TestFuseDFS.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/TestFuseDFS.java
@@ -22,8 +22,8 @@ import java.util.ArrayList;
 import java.util.concurrent.atomic.*;
 
 import org.apache.log4j.Level;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.*;
 import org.apache.hadoop.fs.permission.*;
@@ -48,7 +48,7 @@ public class TestFuseDFS {
   private static Runtime r;
   private static String mountPoint;
 
-  private static final Log LOG = LogFactory.getLog(TestFuseDFS.class);
+  private static final Logger LOG = LoggerFactory.getLogger(TestFuseDFS.class);
   {
 GenericTestUtils.setLogLevel(LOG, Level.ALL);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b1fa569/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
index 4ae51c6..2721395 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
@@ -26,8 +26,8 @@ import java.util.Collections;
 import java.util.List;
 import java.util.HashMap;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.hdfs.DFSClient;
 import org.apache.hadoop.hdfs.nfs.conf.NfsConfigKeys;
@@ -61,7 +61,8 @@ import com.google.common.annotations.VisibleForTesting;
  * RPC program corresponding to mountd daemon. See {@link Mountd}.
  */
 public class RpcProgramMountd extends RpcProgram implements MountInterface {
-  private static final Log LOG = LogFactory.getLog(RpcProgramMountd.class);
+  private static final Logger LOG =
+  LoggerFactory.getLogger(RpcProgramMountd.class);
   public static final int PROGRAM = 15;
   public static final int VERSION_1 = 1;
   public static final int VERSION_2 = 2;

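The same mechanical pattern repeats across the files in this patch; a minimal before/after sketch (the class name is illustrative):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class Example {
  // Before: private static final Log LOG = LogFactory.getLog(Example.class);
  private static final Logger LOG = LoggerFactory.getLogger(Example.class);

  void mount(String exportPath) {
    // slf4j parameterized messages skip formatting when the level is off.
    LOG.debug("Mount request for path {}", exportPath);
  }
}
```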
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b1fa569/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/AsyncDataService.java
--
diff --git 

[07/47] hadoop git commit: HDDS-377. Make the ScmClient closable and stop the started threads. Contributed by Elek Marton.

2018-08-31 Thread xkrogen
HDDS-377. Make the ScmClient closable and stop the started threads. Contributed 
by Elek Marton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6eecd251
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6eecd251
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6eecd251

Branch: refs/heads/HDFS-12943
Commit: 6eecd251d8cf92e9cd7567734cbf8b38857118fb
Parents: 84973d1
Author: Xiaoyu Yao 
Authored: Mon Aug 27 08:19:05 2018 -0700
Committer: Xiaoyu Yao 
Committed: Mon Aug 27 08:19:38 2018 -0700

--
 .../hadoop/hdds/scm/client/ContainerOperationClient.java| 9 +
 .../java/org/apache/hadoop/hdds/scm/client/ScmClient.java   | 9 ++---
 2 files changed, 15 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6eecd251/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
--
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
index faa1ec6..8c8cb95 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
@@ -257,6 +257,15 @@ public class ContainerOperationClient implements ScmClient 
{
 factor, nodePool);
   }
 
+  @Override
+  public void close() {
+try {
+  xceiverClientManager.close();
+} catch (Exception ex) {
+  LOG.error("Can't close " + this.getClass().getSimpleName(), ex);
+}
+  }
+
   /**
* Deletes an existing container.
*

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6eecd251/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
index 7955179..184c547 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
@@ -25,6 +25,7 @@ import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
 .ContainerData;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 
+import java.io.Closeable;
 import java.io.IOException;
 import java.util.List;
 
@@ -39,7 +40,7 @@ import java.util.List;
  * this interface will likely be removed.
  */
 @InterfaceStability.Unstable
-public interface ScmClient {
+public interface ScmClient extends Closeable {
   /**
* Creates a Container on SCM and returns the pipeline.
* @return ContainerInfo
@@ -61,7 +62,8 @@ public interface ScmClient {
* @return ContainerWithPipeline
* @throws IOException
*/
-  ContainerWithPipeline getContainerWithPipeline(long containerId) throws 
IOException;
+  ContainerWithPipeline getContainerWithPipeline(long containerId)
+  throws IOException;
 
   /**
* Close a container.
@@ -87,7 +89,8 @@ public interface ScmClient {
* @param force - true to forcibly delete the container.
* @throws IOException
*/
-  void deleteContainer(long containerId, Pipeline pipeline, boolean force) 
throws IOException;
+  void deleteContainer(long containerId, Pipeline pipeline, boolean force)
+  throws IOException;
 
   /**
* Deletes an existing container.


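With ScmClient extending Closeable, callers can scope the client with try-with-resources; a sketch under the assumption that createScmClient() is a hypothetical factory standing in for however the client is built:

```java
// Hedged sketch: close() now tears down the XceiverClientManager threads.
try (ScmClient scmClient = createScmClient(conf)) {  // hypothetical factory
  ContainerWithPipeline container =
      scmClient.getContainerWithPipeline(containerId);
  // ... operate on the container ...
}  // scmClient.close() runs here, stopping the started client threads
```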
-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[16/47] hadoop git commit: HDDS-247. Handle CLOSED_CONTAINER_IO exception in ozoneClient. Contributed by Shashikant Banerjee.

2018-08-31 Thread xkrogen
HDDS-247. Handle CLOSED_CONTAINER_IO exception in ozoneClient. Contributed by 
Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3974427f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3974427f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3974427f

Branch: refs/heads/HDFS-12943
Commit: 3974427f67299496e13b04f0d006d367b705fcb5
Parents: 26c2a97
Author: Mukul Kumar Singh 
Authored: Tue Aug 28 07:11:36 2018 +0530
Committer: Mukul Kumar Singh 
Committed: Tue Aug 28 07:12:07 2018 +0530

--
 .../hdds/scm/storage/ChunkOutputStream.java |  28 +-
 .../ozone/client/io/ChunkGroupOutputStream.java | 195 +++--
 .../hadoop/ozone/om/helpers/OmKeyInfo.java  |  23 +-
 .../rpc/TestCloseContainerHandlingByClient.java | 408 +++
 .../ozone/container/ContainerTestHelper.java|  21 +
 .../hadoop/ozone/om/TestOmBlockVersioning.java  |  16 +-
 6 files changed, 630 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3974427f/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
--
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
index 779e636..7309434 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
@@ -94,6 +94,10 @@ public class ChunkOutputStream extends OutputStream {
 this.chunkIndex = 0;
   }
 
+  public ByteBuffer getBuffer() {
+return buffer;
+  }
+
   @Override
   public synchronized void write(int b) throws IOException {
 checkOpen();
@@ -106,7 +110,8 @@ public class ChunkOutputStream extends OutputStream {
   }
 
   @Override
-  public void write(byte[] b, int off, int len) throws IOException {
+  public synchronized void write(byte[] b, int off, int len)
+  throws IOException {
 if (b == null) {
   throw new NullPointerException();
 }
@@ -143,24 +148,27 @@ public class ChunkOutputStream extends OutputStream {
 
   @Override
   public synchronized void close() throws IOException {
-if (xceiverClientManager != null && xceiverClient != null &&
-buffer != null) {
+if (xceiverClientManager != null && xceiverClient != null
+&& buffer != null) {
+  if (buffer.position() > 0) {
+writeChunkToContainer();
+  }
   try {
-if (buffer.position() > 0) {
-  writeChunkToContainer();
-}
 putKey(xceiverClient, containerKeyData.build(), traceID);
   } catch (IOException e) {
 throw new IOException(
 "Unexpected Storage Container Exception: " + e.toString(), e);
   } finally {
-xceiverClientManager.releaseClient(xceiverClient);
-xceiverClientManager = null;
-xceiverClient = null;
-buffer = null;
+cleanup();
   }
 }
+  }
 
+  public synchronized void cleanup() {
+xceiverClientManager.releaseClient(xceiverClient);
+xceiverClientManager = null;
+xceiverClient = null;
+buffer = null;
   }
 
   /**

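One motivation for exposing getBuffer() is that, on a CLOSED_CONTAINER_IO failure, the caller can recover bytes that were buffered but not yet flushed; a conceptual sketch, not the actual patch code:

```java
// Hedged sketch: salvage un-flushed bytes from a stream whose container
// closed, then replay them elsewhere.
ByteBuffer leftover = chunkOutputStream.getBuffer();
byte[] pending = new byte[leftover.position()];
leftover.flip();
leftover.get(pending);        // copy out the uncommitted data
chunkOutputStream.cleanup();  // release the client without committing the key
// ... allocate a new block on another container and re-write `pending` ...
```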
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3974427f/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
--
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
index 83b4dfd..988af07 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
@@ -27,6 +27,7 @@ import 
org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline
 import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
 import 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ObjectStageChangeRequestProto;
 import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
@@ -46,8 +47,10 @@ import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.io.OutputStream;
+import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import 

[11/47] hadoop git commit: YARN-8705. Refactor the UAM heartbeat thread in preparation for YARN-8696. Contributed by Botong Huang.

2018-08-31 Thread xkrogen
YARN-8705. Refactor the UAM heartbeat thread in preparation for YARN-8696. 
Contributed by Botong Huang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f1525825
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f1525825
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f1525825

Branch: refs/heads/HDFS-12943
Commit: f1525825623a1307b5aa55c456b6afa3e0c61135
Parents: 7b1fa56
Author: Giovanni Matteo Fumarola 
Authored: Mon Aug 27 10:32:22 2018 -0700
Committer: Giovanni Matteo Fumarola 
Committed: Mon Aug 27 10:32:22 2018 -0700

--
 .../yarn/server/AMHeartbeatRequestHandler.java  | 227 +
 .../server/uam/UnmanagedApplicationManager.java | 170 ++---
 .../amrmproxy/FederationInterceptor.java| 245 +--
 3 files changed, 358 insertions(+), 284 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f1525825/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMHeartbeatRequestHandler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMHeartbeatRequestHandler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMHeartbeatRequestHandler.java
new file mode 100644
index 000..42227bb
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMHeartbeatRequestHandler.java
@@ -0,0 +1,227 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.hadoop.yarn.server;
+
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
+import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.utils.YarnServerSecurityUtils;
+import org.apache.hadoop.yarn.util.AsyncCallback;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+
+/**
+ * Extends Thread and provides an implementation that is used for processing 
the
+ * AM heart beat request asynchronously and sending back the response using the
+ * callback method registered with the system.
+ */
+public class AMHeartbeatRequestHandler extends Thread {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(AMHeartbeatRequestHandler.class);
+
+  // Indication flag for the thread to keep running
+  private volatile boolean keepRunning;
+
+  private Configuration conf;
+  private ApplicationId applicationId;
+
+  private BlockingQueue<AsyncAllocateRequestInfo> requestQueue;
+  private AMRMClientRelayer rmProxyRelayer;
+  private UserGroupInformation userUgi;
+  private int lastResponseId;
+
+  public AMHeartbeatRequestHandler(Configuration conf,
+  ApplicationId applicationId) {
+super("AMHeartbeatRequestHandler Heartbeat Handler Thread");
+this.setUncaughtExceptionHandler(
+new HeartBeatThreadUncaughtExceptionHandler());
+this.keepRunning = true;
+
+this.conf = conf;
+this.applicationId = applicationId;
+this.requestQueue = new LinkedBlockingQueue<>();
+
+resetLastResponseId();
+  }
+
+  /**
+   * Shutdown the thread.
+   */
+  public void shutdown() {
+this.keepRunning = false;
+this.interrupt();
+  }
+
+  @Override
+  public void run() {
+while (keepRunning) {
+  AsyncAllocateRequestInfo requestInfo;
+  try {
+requestInfo = requestQueue.take();
+if (requestInfo == null) {
+  throw new YarnException(
+  
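
A hypothetical usage sketch of the handler's lifecycle; the async enqueue call is assumed, as it is not visible in this excerpt:

```java
// Hedged sketch: the handler is a thread that drains queued allocate
// requests and reports responses through a registered callback.
AMHeartbeatRequestHandler handler =
    new AMHeartbeatRequestHandler(conf, applicationId);
handler.start();     // run() begins taking requests from the queue
// handler.allocateAsync(allocateRequest, responseCallback);  // assumed API
handler.shutdown();  // sets keepRunning = false and interrupts take()
```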

[06/47] hadoop git commit: MAPREDUCE-6861. Add metrics tags for ShuffleClientMetrics. (Contributed by Zoltan Siegl)

2018-08-31 Thread xkrogen
MAPREDUCE-6861. Add metrics tags for ShuffleClientMetrics. (Contributed by 
Zoltan Siegl)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/84973d10
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/84973d10
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/84973d10

Branch: refs/heads/HDFS-12943
Commit: 84973d104917c0b8cbb187ee4f9868bbce967728
Parents: a813fd0
Author: Haibo Chen 
Authored: Mon Aug 27 16:53:06 2018 +0200
Committer: Haibo Chen 
Committed: Mon Aug 27 16:53:06 2018 +0200

--
 .../hadoop/mapreduce/task/reduce/Shuffle.java   | 24 ---
 .../task/reduce/ShuffleClientMetrics.java   | 43 ++-
 .../task/reduce/TestShuffleClientMetrics.java   | 75 
 3 files changed, 129 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/84973d10/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Shuffle.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Shuffle.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Shuffle.java
index 3382bbf..1aad71d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Shuffle.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Shuffle.java
@@ -37,7 +37,8 @@ import org.apache.hadoop.util.Progress;
 @InterfaceAudience.LimitedPrivate({"MapReduce"})
 @InterfaceStability.Unstable
 @SuppressWarnings({"unchecked", "rawtypes"})
-public class Shuffle implements ShuffleConsumerPlugin, 
ExceptionReporter {
+public class Shuffle implements ShuffleConsumerPlugin,
+ExceptionReporter {
   private static final int PROGRESS_FREQUENCY = 2000;
   private static final int MAX_EVENTS_TO_FETCH = 1;
   private static final int MIN_EVENTS_TO_FETCH = 100;
@@ -51,7 +52,7 @@ public class Shuffle implements 
ShuffleConsumerPlugin, ExceptionRepo
   private ShuffleClientMetrics metrics;
   private TaskUmbilicalProtocol umbilical;
   
-  private ShuffleSchedulerImpl scheduler;
+  private ShuffleSchedulerImpl scheduler;
   private MergeManager merger;
   private Throwable throwable = null;
   private String throwingThreadName = null;
@@ -68,7 +69,8 @@ public class Shuffle implements 
ShuffleConsumerPlugin, ExceptionRepo
 this.jobConf = context.getJobConf();
 this.umbilical = context.getUmbilical();
 this.reporter = context.getReporter();
-this.metrics = ShuffleClientMetrics.create();
+this.metrics = ShuffleClientMetrics.create(context.getReduceId(),
+this.jobConf);
 this.copyPhase = context.getCopyPhase();
 this.taskStatus = context.getStatus();
 this.reduceTask = context.getReduceTask();
@@ -101,16 +103,16 @@ public class Shuffle implements 
ShuffleConsumerPlugin, ExceptionRepo
 int maxEventsToFetch = Math.min(MAX_EVENTS_TO_FETCH, eventsPerReducer);
 
 // Start the map-completion events fetcher thread
-final EventFetcher eventFetcher = 
-  new EventFetcher(reduceId, umbilical, scheduler, this,
-  maxEventsToFetch);
+final EventFetcher eventFetcher =
+new EventFetcher(reduceId, umbilical, scheduler, this,
+maxEventsToFetch);
 eventFetcher.start();
 
 // Start the map-output fetcher threads
 boolean isLocal = localMapFiles != null;
 final int numFetchers = isLocal ? 1 :
-  jobConf.getInt(MRJobConfig.SHUFFLE_PARALLEL_COPIES, 5);
-Fetcher[] fetchers = new Fetcher[numFetchers];
+jobConf.getInt(MRJobConfig.SHUFFLE_PARALLEL_COPIES, 5);
+Fetcher[] fetchers = new Fetcher[numFetchers];
 if (isLocal) {
   fetchers[0] = new LocalFetcher(jobConf, reduceId, scheduler,
   merger, reporter, metrics, this, reduceTask.getShuffleSecret(),
@@ -118,7 +120,7 @@ public class Shuffle implements 
ShuffleConsumerPlugin, ExceptionRepo
   fetchers[0].start();
 } else {
   for (int i=0; i < numFetchers; ++i) {
-fetchers[i] = new Fetcher(jobConf, reduceId, scheduler, merger, 
+fetchers[i] = new Fetcher(jobConf, reduceId, scheduler, merger,
reporter, metrics, this, 
reduceTask.getShuffleSecret());
 fetchers[i].start();
@@ -141,7 +143,7 @@ public class Shuffle implements 
ShuffleConsumerPlugin, ExceptionRepo
 eventFetcher.shutDown();
 
 // Stop the map-output fetcher 

[02/47] hadoop git commit: HDDS-334. Update GettingStarted page to mention details about Ozone GenConf tool. Contributed by Dinesh Chitlangia.

2018-08-31 Thread xkrogen
HDDS-334. Update GettingStarted page to mention details about Ozone GenConf 
tool. Contributed by Dinesh Chitlangia.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/91836f0f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/91836f0f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/91836f0f

Branch: refs/heads/HDFS-12943
Commit: 91836f0f81a663d1efa9f29b1610bcdbe0cc79c1
Parents: b9b964d
Author: Márton Elek 
Authored: Mon Aug 27 11:41:08 2018 +0200
Committer: Márton Elek 
Committed: Mon Aug 27 11:41:08 2018 +0200

--
 hadoop-ozone/docs/content/GettingStarted.md | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/91836f0f/hadoop-ozone/docs/content/GettingStarted.md
--
diff --git a/hadoop-ozone/docs/content/GettingStarted.md 
b/hadoop-ozone/docs/content/GettingStarted.md
index 61d210a..4a57ada 100644
--- a/hadoop-ozone/docs/content/GettingStarted.md
+++ b/hadoop-ozone/docs/content/GettingStarted.md
@@ -127,9 +127,16 @@ be activated as part of the normal HDFS Datanode bootstrap.
 ```
 
 
- Create ozone-site.xml
+ Create/Generate ozone-site.xml
 
 Ozone relies on its own configuration file called `ozone-site.xml`.
+
+The following command will generate a template ozone-site.xml at the specified
+path
+```
+ozone genconf -output <path>
+```
+
 The following are the most important settings.
 
  1. _*ozone.enabled*_  This is the most important setting for ozone.


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[15/47] hadoop git commit: HDFS-13838. WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status. Contributed by Siyao Meng.

2018-08-31 Thread xkrogen
HDFS-13838. WebHdfsFileSystem.getFileStatus() won't return correct "snapshot 
enabled" status. Contributed by Siyao Meng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/26c2a97c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/26c2a97c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/26c2a97c

Branch: refs/heads/HDFS-12943
Commit: 26c2a97c566969f50eb8e8432009724c51152a98
Parents: 602d138
Author: Wei-Chiu Chuang 
Authored: Mon Aug 27 16:02:35 2018 -0700
Committer: Wei-Chiu Chuang 
Committed: Mon Aug 27 16:02:35 2018 -0700

--
 .../java/org/apache/hadoop/hdfs/web/JsonUtilClient.java |  4 
 .../java/org/apache/hadoop/hdfs/web/TestWebHDFS.java| 12 
 2 files changed, 16 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/26c2a97c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
index 9bb1846..a685573 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
@@ -133,6 +133,7 @@ class JsonUtilClient {
 Boolean aclBit = (Boolean) m.get("aclBit");
 Boolean encBit = (Boolean) m.get("encBit");
 Boolean erasureBit  = (Boolean) m.get("ecBit");
+Boolean snapshotEnabledBit  = (Boolean) m.get("snapshotEnabled");
 EnumSet<HdfsFileStatus.Flags> f =
 EnumSet.noneOf(HdfsFileStatus.Flags.class);
 if (aclBit != null && aclBit) {
@@ -144,6 +145,9 @@ class JsonUtilClient {
 if (erasureBit != null && erasureBit) {
   f.add(HdfsFileStatus.Flags.HAS_EC);
 }
+if (snapshotEnabledBit != null && snapshotEnabledBit) {
+  f.add(HdfsFileStatus.Flags.SNAPSHOT_ENABLED);
+}
 
 Map<String, Object> ecPolicyObj = (Map<String, Object>) m.get("ecPolicyObj");
 ErasureCodingPolicy ecPolicy = null;

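In short, the client-visible effect (mirroring the new assertions below):

```java
// Sketch: the snapshotEnabled bit now survives the WebHDFS JSON round trip.
FileStatus status = webHdfs.getFileStatus(new Path("/bar"));
boolean snapshotEnabled = status.isSnapshotEnabled();
```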
http://git-wip-us.apache.org/repos/asf/hadoop/blob/26c2a97c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
index cbc428a..9152636 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
@@ -482,6 +482,9 @@ public class TestWebHDFS {
 
   // allow snapshots on /bar using webhdfs
   webHdfs.allowSnapshot(bar);
+  // check if snapshot status is enabled
+  assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
+  assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
   webHdfs.createSnapshot(bar, "s1");
   final Path s1path = SnapshotTestHelper.getSnapshotRoot(bar, "s1");
   Assert.assertTrue(webHdfs.exists(s1path));
@@ -491,15 +494,24 @@ public class TestWebHDFS {
   assertEquals(bar, snapshottableDirs[0].getFullPath());
   dfs.deleteSnapshot(bar, "s1");
   dfs.disallowSnapshot(bar);
+  // check if snapshot status is disabled
+  assertFalse(dfs.getFileStatus(bar).isSnapshotEnabled());
+  assertFalse(webHdfs.getFileStatus(bar).isSnapshotEnabled());
   snapshottableDirs = dfs.getSnapshottableDirListing();
   assertNull(snapshottableDirs);
 
   // disallow snapshots on /bar using webhdfs
   dfs.allowSnapshot(bar);
+  // check if snapshot status is enabled, again
+  assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
+  assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
   snapshottableDirs = dfs.getSnapshottableDirListing();
   assertEquals(1, snapshottableDirs.length);
   assertEquals(bar, snapshottableDirs[0].getFullPath());
   webHdfs.disallowSnapshot(bar);
+  // check if snapshot status is disabled, again
+  assertFalse(dfs.getFileStatus(bar).isSnapshotEnabled());
+  assertFalse(webHdfs.getFileStatus(bar).isSnapshotEnabled());
   snapshottableDirs = dfs.getSnapshottableDirListing();
   assertNull(snapshottableDirs);
   try {


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[05/47] hadoop git commit: HDDS-227. Use Grpc as the default transport protocol for Standalone pipeline. Contributed by chencan.

2018-08-31 Thread xkrogen
HDDS-227. Use Grpc as the default transport protocol for Standalone pipeline. 
Contributed by chencan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a813fd02
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a813fd02
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a813fd02

Branch: refs/heads/HDFS-12943
Commit: a813fd02158c89c9c443cfe03ec5cdd8ad262d1f
Parents: 744ce20
Author: Márton Elek 
Authored: Mon Aug 27 16:07:55 2018 +0200
Committer: Márton Elek 
Committed: Mon Aug 27 16:07:55 2018 +0200

--
 .../hadoop/hdds/scm/XceiverClientManager.java   |  6 +--
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   |  4 --
 .../common/src/main/resources/ozone-default.xml |  9 
 .../container/ozoneimpl/OzoneContainer.java | 11 +
 .../container/ozoneimpl/TestOzoneContainer.java | 51 
 .../ozone/scm/TestXceiverClientManager.java | 25 --
 .../hadoop/ozone/web/client/TestKeys.java   | 16 --
 7 files changed, 13 insertions(+), 109 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a813fd02/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
--
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
index 8919797..125e5d5 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
@@ -60,7 +60,6 @@ public class XceiverClientManager implements Closeable {
   private final Configuration conf;
   private final Cache clientCache;
   private final boolean useRatis;
-  private final boolean useGrpc;
 
   private static XceiverClientMetrics metrics;
   /**
@@ -78,8 +77,6 @@ public class XceiverClientManager implements Closeable {
 this.useRatis = conf.getBoolean(
 ScmConfigKeys.DFS_CONTAINER_RATIS_ENABLED_KEY,
 ScmConfigKeys.DFS_CONTAINER_RATIS_ENABLED_DEFAULT);
-this.useGrpc = 
conf.getBoolean(ScmConfigKeys.DFS_CONTAINER_GRPC_ENABLED_KEY,
-ScmConfigKeys.DFS_CONTAINER_GRPC_ENABLED_DEFAULT);
 this.conf = conf;
 this.clientCache = CacheBuilder.newBuilder()
 .expireAfterAccess(staleThresholdMs, TimeUnit.MILLISECONDS)
@@ -153,8 +150,7 @@ public class XceiverClientManager implements Closeable {
   client = XceiverClientRatis.newXceiverClientRatis(pipeline, 
conf);
   break;
 case STAND_ALONE:
-  client = useGrpc ? new XceiverClientGrpc(pipeline, conf) :
-  new XceiverClient(pipeline, conf);
+  client = new XceiverClientGrpc(pipeline, conf);
   break;
 case CHAINED:
 default:

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a813fd02/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index 2834883..4c9a3bf 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -49,10 +49,6 @@ public final class ScmConfigKeys {
   = "dfs.container.ratis.enabled";
   public static final boolean DFS_CONTAINER_RATIS_ENABLED_DEFAULT
   = false;
-  public static final String DFS_CONTAINER_GRPC_ENABLED_KEY
-  = "dfs.container.grpc.enabled";
-  public static final boolean DFS_CONTAINER_GRPC_ENABLED_DEFAULT
-  = false;
   public static final String DFS_CONTAINER_RATIS_RPC_TYPE_KEY
   = "dfs.container.ratis.rpc.type";
   public static final String DFS_CONTAINER_RATIS_RPC_TYPE_DEFAULT

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a813fd02/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 37a845e..f2544d9 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -95,15 +95,6 @@
 
   </property>
   <property>
-    <name>dfs.container.grpc.enabled</name>
-    <value>false</value>
-    <tag>OZONE, MANAGEMENT, PIPELINE, RATIS</tag>
-    <description>Ozone supports different kinds of replication pipelines
-      protocols. grpc is one of the replication pipeline protocol supported by
-      ozone.
-    </description>
-  </property>
-  <property>
 

[25/47] hadoop git commit: HDFS-13837. Enable debug log for LeaseRenewer in TestDistributedFileSystem. Contributed by Shweta.

2018-08-31 Thread xkrogen
HDFS-13837. Enable debug log for LeaseRenewer in TestDistributedFileSystem. 
Contributed by Shweta.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/33f42efc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/33f42efc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/33f42efc

Branch: refs/heads/HDFS-12943
Commit: 33f42efc947445b7755da6aad34b5e26b96ad663
Parents: ac515d2
Author: Shweta 
Authored: Tue Aug 28 13:51:04 2018 -0700
Committer: Xiao Chen 
Committed: Tue Aug 28 13:56:32 2018 -0700

--
 .../java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java  | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/33f42efc/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
index 46323dd..cae0fbf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
@@ -100,12 +100,12 @@ import org.apache.hadoop.test.Whitebox;
 import org.apache.hadoop.util.DataChecksum;
 import org.apache.hadoop.util.Time;
 import org.apache.hadoop.util.concurrent.HadoopExecutors;
-import org.apache.log4j.Level;
 import org.junit.Assert;
 import org.junit.Test;
 import org.mockito.InOrder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import org.slf4j.event.Level;
 
 public class TestDistributedFileSystem {
   private static final Random RAN = new Random();
@@ -113,7 +113,8 @@ public class TestDistributedFileSystem {
   TestDistributedFileSystem.class);
 
   static {
-GenericTestUtils.setLogLevel(DFSClient.LOG, Level.ALL);
+GenericTestUtils.setLogLevel(DFSClient.LOG, Level.TRACE);
+GenericTestUtils.setLogLevel(LeaseRenewer.LOG, Level.DEBUG);
   }
 
   private boolean dualPortTesting = false;


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-7865. Node attributes documentation. Contributed by Naganarasimha G R.

2018-08-31 Thread wwei
Repository: hadoop
Updated Branches:
  refs/heads/YARN-3409 23433323e -> 77bad2002


YARN-7865. Node attributes documentation. Contributed by Naganarasimha G R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/77bad200
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/77bad200
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/77bad200

Branch: refs/heads/YARN-3409
Commit: 77bad20027936db7f1651cc35323172d6f7cc30e
Parents: 2343332
Author: Weiwei Yang 
Authored: Fri Aug 31 17:52:26 2018 +0800
Committer: Weiwei Yang 
Committed: Fri Aug 31 17:52:26 2018 +0800

--
 hadoop-project/src/site/site.xml|   1 +
 .../src/site/markdown/NodeAttributes.md | 156 +++
 2 files changed, 157 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/77bad200/hadoop-project/src/site/site.xml
--
diff --git a/hadoop-project/src/site/site.xml b/hadoop-project/src/site/site.xml
index 40df7c5..b40dbfc 100644
--- a/hadoop-project/src/site/site.xml
+++ b/hadoop-project/src/site/site.xml
@@ -142,6 +142,7 @@
   
   
   
+      <item name="Node Attributes" href="hadoop-yarn/hadoop-yarn-site/NodeAttributes.html"/>
   
   
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/77bad200/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeAttributes.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeAttributes.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeAttributes.md
new file mode 100644
index 000..5128004
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeAttributes.md
@@ -0,0 +1,156 @@
+
+
+YARN Node Attributes
+===
+
+
+
+Overview
+--------
+
+Node Attributes are a way to describe a node with attributes that carry no resource guarantees. Applications can use expressions over one or more of these attributes to pick the right nodes on which their containers should be placed.
+
+Features
+--------
+
+The salient features of ```Node Attributes``` are as follows:
+
+* A Node can be associated with multiple attributes.
+* A value can be associated with an attribute tagged to a node. Only string-type values are currently supported.
+* Unlike Node Labels, Node Attributes need not be specified explicitly at the cluster level, but there are APIs to list the attributes available at the cluster level.
+* As it is a non-tangible resource, it is not associated with any queue, and thus no queue resource planning or authorisation is required for attributes.
+* Similar to allocation tags, applications will be able to request containers using expressions containing one or more of these attributes via *Placement Constraints*.
+* Equals (=) and Not Equals (!=) are the only supported operators in the expression. AND & OR can also be used as part of an attribute expression.
+* Node attribute constraints are hard limits: an allocation can be made only on a node that satisfies the constraint. In other words, the request stays pending until a valid node satisfying the constraint is found. There is no relax policy at present.
+* Operability
+    * Node Attributes and their mapping to nodes can be recovered across RM restarts
+    * Update node attributes - an admin can add, remove and replace attributes on nodes while the RM is running
+* Mapping of NM to node attributes can be done in two ways:
+    * **Centralised :** Node-to-attributes mapping can be done through the RM-exposed CLI or RPC (REST is yet to be supported).
+    * **Distributed :** Node-to-attributes mapping will be set by a configured Node Attributes Provider in the NM. We have two different providers in YARN: a *Script*-based provider and a *Configuration*-based provider. In the script case, the NM can be configured with a script path and the script can emit the attribute(s) of the node. In the config case, node attributes can be configured directly in the NM's yarn-site.xml. Dynamic refresh of the attribute mapping is supported with both of these options.
+
+* Unlike labels, attributes can be mapped to a node from both Centralised and Distributed modes at the same time. There will be no clashes, as attributes are identified by different prefixes in the different modes. In the **Centralized** case attributes are identified by the prefix *"rm.yarn.io"*, and in the **Distributed** case by the prefix *"nm.yarn.io"*. This implies attributes are uniquely identified by *prefix* and *name*.
+
+Configuration
+-------------
+
+### Setting up ResourceManager for Node Attributes
+
+Unlike Node Labels, Node Attributes need not be explicitly enabled, as they always exist and would have no