hadoop git commit: YARN-7488. Make ServiceClient.getAppId method public to return ApplicationId for a service name. Contributed by Gour Saha

2017-11-13 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8b1257416 -> 4f40cd314


YARN-7488. Make ServiceClient.getAppId method public to return ApplicationId 
for a service name. Contributed by Gour Saha


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4f40cd31
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4f40cd31
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4f40cd31

Branch: refs/heads/trunk
Commit: 4f40cd314ab14f735a465fb9dff2dc1bf118e703
Parents: 8b12574
Author: Jian He 
Authored: Mon Nov 13 18:55:12 2017 -0800
Committer: Jian He 
Committed: Mon Nov 13 18:57:56 2017 -0800

--
 .../java/org/apache/hadoop/yarn/service/client/ServiceClient.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4f40cd31/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
index 11cd30d..af43f8a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
@@ -943,7 +943,7 @@ public class ServiceClient extends AppAdminClient 
implements SliderExitCodes,
 UserGroupInformation.getCurrentUser(), rpc, address);
   }
 
-  private synchronized ApplicationId getAppId(String serviceName)
+  public synchronized ApplicationId getAppId(String serviceName)
   throws IOException, YarnException {
 if (cachedAppIds.containsKey(serviceName)) {
   return cachedAppIds.get(serviceName);
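The visible tail of the hunk shows `getAppId` consulting a `cachedAppIds` map before doing any cluster lookup. A standalone sketch of that memoization pattern (hypothetical class and member names; the real `ServiceClient` resolves cache misses through a YARN lookup rather than fabricating an id):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical stand-in for ServiceClient's appId caching; not the Hadoop class. */
public class AppIdCache {
    private final Map<String, String> cachedAppIds = new ConcurrentHashMap<>();
    private int lookups = 0; // counts cache misses, for illustration only

    /** The real client would ask the cluster here; this sketch fabricates an id. */
    private String resolveFromCluster(String serviceName) {
        lookups++;
        return "application_1510000000000_" + Math.abs(serviceName.hashCode() % 10000);
    }

    /** Mirrors the shape of the now-public getAppId(serviceName). */
    public synchronized String getAppId(String serviceName) {
        String cached = cachedAppIds.get(serviceName);
        if (cached != null) {
            return cached; // hit: no cluster round trip
        }
        String appId = resolveFromCluster(serviceName);
        cachedAppIds.put(serviceName, appId);
        return appId;
    }

    public int getLookupCount() {
        return lookups;
    }
}
```

Making the method public lets external tooling map a service name to its ApplicationId without re-implementing this lookup.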


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15037. Add site release notes for OrgQueue and resource types.

2017-11-13 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 ef212b855 -> 3f96ecf5c


HADOOP-15037. Add site release notes for OrgQueue and resource types.

(cherry picked from commit 8b125741659a825c71877bd1b1cb8f7e3ef26436)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3f96ecf5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3f96ecf5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3f96ecf5

Branch: refs/heads/branch-3.0
Commit: 3f96ecf5c3d38da181078b5704471d5b36467be7
Parents: ef212b8
Author: Andrew Wang 
Authored: Mon Nov 13 18:49:22 2017 -0800
Committer: Andrew Wang 
Committed: Mon Nov 13 18:49:50 2017 -0800

--
 hadoop-project/src/site/markdown/index.md.vm | 20 +++-
 1 file changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f96ecf5/hadoop-project/src/site/markdown/index.md.vm
--
diff --git a/hadoop-project/src/site/markdown/index.md.vm 
b/hadoop-project/src/site/markdown/index.md.vm
index 8e1e06f..9b2d9de 100644
--- a/hadoop-project/src/site/markdown/index.md.vm
+++ b/hadoop-project/src/site/markdown/index.md.vm
@@ -225,9 +225,27 @@ cluster for existing HDFS clients.
 
See [HDFS-10467](https://issues.apache.org/jira/browse/HDFS-10467) and the
 HDFS Router-based Federation
-[documentation](./hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.md) for
+[documentation](./hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html) 
for
 more details.
 
+API-based configuration of Capacity Scheduler queue configuration
+--
+
+The OrgQueue extension to the capacity scheduler provides a programmatic way to
+change configurations by providing a REST API that users can call to modify
+queue configurations. This enables automation of queue configuration management
+by administrators in the queue's `administer_queue` ACL.
+
+See [YARN-5734](https://issues.apache.org/jira/browse/YARN-5734) and the
+[Capacity Scheduler 
documentation](./hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html) for more 
information.
+
+YARN Resource Types
+---
+
+The YARN resource model has been generalized to support user-defined countable 
resource types beyond CPU and memory. For instance, the cluster administrator 
could define resources like GPUs, software licenses, or locally-attached 
storage. YARN tasks can then be scheduled based on the availability of these 
resources.
+
+See [YARN-3926](https://issues.apache.org/jira/browse/YARN-3926) and the [YARN 
resource model 
documentation](./hadoop-yarn/hadoop-yarn-site/ResourceModel.html) for more 
information.
+
 Getting Started
 ===
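The resource-types section added above describes user-defined countable resources. A minimal `resource-types.xml` along the lines the YARN-3926 documentation describes (the resource names here are illustrative examples, not defaults):

```xml
<configuration>
  <!-- Comma-separated list of additional resource types the scheduler tracks -->
  <property>
    <name>yarn.resource-types</name>
    <value>yarn.io/gpu,licenses</value>
  </property>
  <!-- Optional unit for a declared resource; empty means a plain count -->
  <property>
    <name>yarn.resource-types.licenses.units</name>
    <value></value>
  </property>
</configuration>
```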
 





hadoop git commit: HADOOP-15037. Add site release notes for OrgQueue and resource types.

2017-11-13 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5323b0048 -> 8b1257416


HADOOP-15037. Add site release notes for OrgQueue and resource types.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8b125741
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8b125741
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8b125741

Branch: refs/heads/trunk
Commit: 8b125741659a825c71877bd1b1cb8f7e3ef26436
Parents: 5323b00
Author: Andrew Wang 
Authored: Mon Nov 13 18:49:22 2017 -0800
Committer: Andrew Wang 
Committed: Mon Nov 13 18:49:22 2017 -0800

--
 hadoop-project/src/site/markdown/index.md.vm | 20 +++-
 1 file changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8b125741/hadoop-project/src/site/markdown/index.md.vm
--
diff --git a/hadoop-project/src/site/markdown/index.md.vm 
b/hadoop-project/src/site/markdown/index.md.vm
index 8e1e06f..9b2d9de 100644
--- a/hadoop-project/src/site/markdown/index.md.vm
+++ b/hadoop-project/src/site/markdown/index.md.vm
@@ -225,9 +225,27 @@ cluster for existing HDFS clients.
 
See [HDFS-10467](https://issues.apache.org/jira/browse/HDFS-10467) and the
 HDFS Router-based Federation
-[documentation](./hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.md) for
+[documentation](./hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html) 
for
 more details.
 
+API-based configuration of Capacity Scheduler queue configuration
+--
+
+The OrgQueue extension to the capacity scheduler provides a programmatic way to
+change configurations by providing a REST API that users can call to modify
+queue configurations. This enables automation of queue configuration management
+by administrators in the queue's `administer_queue` ACL.
+
+See [YARN-5734](https://issues.apache.org/jira/browse/YARN-5734) and the
+[Capacity Scheduler 
documentation](./hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html) for more 
information.
+
+YARN Resource Types
+---
+
+The YARN resource model has been generalized to support user-defined countable 
resource types beyond CPU and memory. For instance, the cluster administrator 
could define resources like GPUs, software licenses, or locally-attached 
storage. YARN tasks can then be scheduled based on the availability of these 
resources.
+
+See [YARN-3926](https://issues.apache.org/jira/browse/YARN-3926) and the [YARN 
resource model 
documentation](./hadoop-yarn/hadoop-yarn-site/ResourceModel.html) for more 
information.
+
 Getting Started
 ===
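The OrgQueue section added above describes mutating Capacity Scheduler queue configuration over REST. A sketch of the kind of request involved (endpoint path and payload schema are illustrative, drawn from the YARN-5734 line of work; verify against the Capacity Scheduler documentation before use):

```xml
<!-- Illustrative PUT body for the RM's scheduler-conf endpoint, e.g.
     curl -X PUT -H "Content-Type: application/xml" -d @update.xml \
          http://rm-host:8088/ws/v1/cluster/scheduler-conf -->
<sched-conf>
  <update-queue>
    <queue-name>root.default</queue-name>
    <params>
      <entry>
        <key>maximum-capacity</key>
        <value>75</value>
      </entry>
    </params>
  </update-queue>
</sched-conf>
```

Only callers in the target queue's `administer_queue` ACL are permitted to make such changes.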
 





hadoop git commit: YARN-6078. Containers stuck in Localizing state. Contributed by Billie Rinaldi.

2017-11-13 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 f894eefec -> a72dcb9ca


YARN-6078. Containers stuck in Localizing state. Contributed by Billie Rinaldi.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a72dcb9c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a72dcb9c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a72dcb9c

Branch: refs/heads/branch-2
Commit: a72dcb9cad7df7f7236092f5b86446c2ef4ea874
Parents: f894eef
Author: Junping Du 
Authored: Mon Nov 13 18:22:30 2017 -0800
Committer: Junping Du 
Committed: Mon Nov 13 18:22:30 2017 -0800

--
 .../localizer/ResourceLocalizationService.java  |  40 ++
 .../TestResourceLocalizationService.java| 144 +++
 2 files changed, 184 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a72dcb9c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
index 29fc747..28a27a7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
@@ -74,6 +74,7 @@ import org.apache.hadoop.service.CompositeService;
 import org.apache.hadoop.util.DiskChecker;
 import org.apache.hadoop.util.DiskValidator;
 import org.apache.hadoop.util.DiskValidatorFactory;
+import org.apache.hadoop.util.Shell;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.concurrent.HadoopExecutors;
 import org.apache.hadoop.util.concurrent.HadoopScheduledThreadPoolExecutor;
@@ -808,6 +809,7 @@ public class ResourceLocalizationService extends 
CompositeService
   return; // ignore; already gone
 }
 privLocalizers.remove(locId);
+LOG.info("Interrupting localizer for " + locId);
 localizer.interrupt();
   }
 }
@@ -1189,6 +1191,44 @@ public class ResourceLocalizationService extends 
CompositeService
 }
 
 @Override
+public void interrupt() {
+  boolean destroyedShell = false;
+  try {
+for (Shell shell : Shell.getAllShells()) {
+  try {
+if (shell.getWaitingThread() != null &&
+shell.getWaitingThread().equals(this) &&
+shell.getProcess() != null &&
+processIsAlive(shell.getProcess())) {
+  LOG.info("Destroying localization shell process for " +
+  localizerId);
+  shell.getProcess().destroy();
+  destroyedShell = true;
+  break;
+}
+  } catch (Exception e) {
+LOG.warn("Failed to destroy localization shell process for " +
+localizerId, e);
+  }
+}
+  } finally {
+if (!destroyedShell) {
+  super.interrupt();
+}
+  }
+}
+
+private boolean processIsAlive(Process p) {
+  try {
+p.exitValue();
+return false;
+  } catch (IllegalThreadStateException e) {
+// this means the process is still alive
+  }
+  return true;
+}
+
+@Override
 @SuppressWarnings("unchecked") // dispatcher not typed
 public void run() {
   Path nmPrivateCTokensPath = null;
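The `processIsAlive` helper in the hunk above relies on `Process.exitValue()` throwing `IllegalThreadStateException` while the process is still running, which was the standard liveness check before Java 8 introduced `Process.isAlive()`. A standalone sketch of the idiom (the demo command is Unix-only):

```java
/** Demonstrates the exitValue()-based liveness check used by the localizer patch. */
public class ProcessAliveDemo {
    static boolean processIsAlive(Process p) {
        try {
            p.exitValue();   // throws while the process has not yet terminated
            return false;
        } catch (IllegalThreadStateException e) {
            return true;     // no exit value yet: still alive
        }
    }

    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("sleep", "5").start(); // Unix-only example
        boolean before = processIsAlive(p);
        p.destroy();
        p.waitFor();         // after waitFor returns, exitValue is available
        boolean after = processIsAlive(p);
        System.out.println(before + " " + after);
    }
}
```

In the patch, this check gates whether `interrupt()` destroys the localizer's shell process directly instead of (or before) interrupting the thread.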

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a72dcb9c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
 

hadoop git commit: Set version to 2.8.3 to start release work.

2017-11-13 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8.3 [created] 9e6ac0509


Set version to 2.8.3 to start release work.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9e6ac050
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9e6ac050
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9e6ac050

Branch: refs/heads/branch-2.8.3
Commit: 9e6ac0509f4af159a4e23cb3350ec74b82c9a783
Parents: b3df744
Author: Junping Du 
Authored: Mon Nov 13 16:52:10 2017 -0800
Committer: Junping Du 
Committed: Mon Nov 13 16:52:10 2017 -0800

--
 hadoop-assemblies/pom.xml| 4 ++--
 hadoop-build-tools/pom.xml   | 2 +-
 hadoop-client/pom.xml| 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml | 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml   | 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml| 4 ++--
 hadoop-common-project/hadoop-common/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml | 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml | 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml | 4 ++--
 hadoop-common-project/pom.xml| 4 ++--
 hadoop-dist/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml| 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml| 4 ++--
 hadoop-hdfs-project/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client-common/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml | 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client-jobclient/pom.xml| 4 ++--
 .../hadoop-mapreduce-client-shuffle/pom.xml  | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml   | 4 ++--
 hadoop-mapreduce-project/pom.xml | 4 ++--
 hadoop-maven-plugins/pom.xml | 2 +-
 hadoop-minicluster/pom.xml   | 4 ++--
 hadoop-project-dist/pom.xml  | 4 ++--
 hadoop-project/pom.xml   | 4 ++--
 hadoop-tools/hadoop-ant/pom.xml  | 4 ++--
 hadoop-tools/hadoop-archive-logs/pom.xml | 4 ++--
 hadoop-tools/hadoop-archives/pom.xml | 4 ++--
 hadoop-tools/hadoop-aws/pom.xml  | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml   | 2 +-
 hadoop-tools/hadoop-azure/pom.xml| 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml | 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml   | 4 ++--
 hadoop-tools/hadoop-extras/pom.xml   | 4 ++--
 hadoop-tools/hadoop-gridmix/pom.xml  | 4 ++--
 hadoop-tools/hadoop-openstack/pom.xml| 4 ++--
 hadoop-tools/hadoop-pipes/pom.xml| 4 ++--
 hadoop-tools/hadoop-rumen/pom.xml| 4 ++--
 hadoop-tools/hadoop-sls/pom.xml  | 4 ++--
 hadoop-tools/hadoop-streaming/pom.xml| 4 ++--
 hadoop-tools/hadoop-tools-dist/pom.xml   | 4 ++--
 hadoop-tools/pom.xml | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml  | 4 ++--
 .../hadoop-yarn-applications-distributedshell/pom.xml| 4 ++--
 .../hadoop-yarn-applications-unmanaged-am-launcher/pom.xml   | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/pom.xml | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml   | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml   | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml | 4 

hadoop git commit: Preparing for 2.8.4 development.

2017-11-13 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 b3df744fb -> d9b006ff3


Preparing for 2.8.4 development.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d9b006ff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d9b006ff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d9b006ff

Branch: refs/heads/branch-2.8
Commit: d9b006ff34fb109de449c27cf8a8f645ec95b827
Parents: b3df744
Author: Junping Du 
Authored: Mon Nov 13 16:49:15 2017 -0800
Committer: Junping Du 
Committed: Mon Nov 13 16:49:15 2017 -0800

--
 hadoop-assemblies/pom.xml| 4 ++--
 hadoop-build-tools/pom.xml   | 2 +-
 hadoop-client/pom.xml| 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml | 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml   | 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml| 4 ++--
 hadoop-common-project/hadoop-common/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml | 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml | 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml | 4 ++--
 hadoop-common-project/pom.xml| 4 ++--
 hadoop-dist/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml| 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml| 4 ++--
 hadoop-hdfs-project/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client-common/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml | 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client-jobclient/pom.xml| 4 ++--
 .../hadoop-mapreduce-client-shuffle/pom.xml  | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml   | 4 ++--
 hadoop-mapreduce-project/pom.xml | 4 ++--
 hadoop-maven-plugins/pom.xml | 2 +-
 hadoop-minicluster/pom.xml   | 4 ++--
 hadoop-project-dist/pom.xml  | 4 ++--
 hadoop-project/pom.xml   | 4 ++--
 hadoop-tools/hadoop-ant/pom.xml  | 4 ++--
 hadoop-tools/hadoop-archive-logs/pom.xml | 4 ++--
 hadoop-tools/hadoop-archives/pom.xml | 4 ++--
 hadoop-tools/hadoop-aws/pom.xml  | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml   | 2 +-
 hadoop-tools/hadoop-azure/pom.xml| 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml | 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml   | 4 ++--
 hadoop-tools/hadoop-extras/pom.xml   | 4 ++--
 hadoop-tools/hadoop-gridmix/pom.xml  | 4 ++--
 hadoop-tools/hadoop-openstack/pom.xml| 4 ++--
 hadoop-tools/hadoop-pipes/pom.xml| 4 ++--
 hadoop-tools/hadoop-rumen/pom.xml| 4 ++--
 hadoop-tools/hadoop-sls/pom.xml  | 4 ++--
 hadoop-tools/hadoop-streaming/pom.xml| 4 ++--
 hadoop-tools/hadoop-tools-dist/pom.xml   | 4 ++--
 hadoop-tools/pom.xml | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml  | 4 ++--
 .../hadoop-yarn-applications-distributedshell/pom.xml| 4 ++--
 .../hadoop-yarn-applications-unmanaged-am-launcher/pom.xml   | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/pom.xml | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml   | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml   | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml | 4 ++--
 

hadoop git commit: YARN-7411. Inter-Queue preemption's computeFixpointAllocation need to handle absolute resources while computing normalizedGuarantee. (Sunil G via wangda)

2017-11-13 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/YARN-5881 6df85aaa2 -> 39f43eb13


YARN-7411. Inter-Queue preemption's computeFixpointAllocation need to handle 
absolute resources while computing normalizedGuarantee. (Sunil G via wangda)

Change-Id: I41b1d7558c20fc4eb2050d40134175a2ef6330cb


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/39f43eb1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/39f43eb1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/39f43eb1

Branch: refs/heads/YARN-5881
Commit: 39f43eb132cff441539ea473e51b8ed6018b7897
Parents: 6df85aa
Author: Wangda Tan 
Authored: Mon Nov 13 16:26:27 2017 -0800
Committer: Wangda Tan 
Committed: Mon Nov 13 16:26:27 2017 -0800

--
 .../api/records/impl/pb/ResourcePBImpl.java | 12 
 .../resource/DefaultResourceCalculator.java |  8 +++
 .../resource/DominantResourceCalculator.java| 21 ++
 .../yarn/util/resource/ResourceCalculator.java  | 14 +++-
 .../hadoop/yarn/util/resource/Resources.java|  5 ++
 .../AbstractPreemptableResourceCalculator.java  | 24 ++-
 .../monitor/capacity/TempQueuePerPartition.java | 12 ++--
 ...alCapacityPreemptionPolicyMockFramework.java | 14 
 ...pacityPreemptionPolicyForNodePartitions.java | 76 
 9 files changed, 166 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/39f43eb1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.java
index 401e0c0..4f90133 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.java
@@ -26,7 +26,6 @@ import 
org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.api.records.ResourceInformation;
 import org.apache.hadoop.yarn.exceptions.ResourceNotFoundException;
-import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.proto.YarnProtos.ResourceProto;
 import org.apache.hadoop.yarn.proto.YarnProtos.ResourceProtoOrBuilder;
 import org.apache.hadoop.yarn.proto.YarnProtos.ResourceInformationProto;
@@ -152,17 +151,6 @@ public class ResourcePBImpl extends Resource {
 .newInstance(ResourceInformation.VCORES);
 this.setMemorySize(p.getMemory());
 this.setVirtualCores(p.getVirtualCores());
-
-// Update missing resource information on respective index.
-updateResourceInformationMap(types);
-  }
-
-  private void updateResourceInformationMap(ResourceInformation[] types) {
-for (int i = 0; i < types.length; i++) {
-  if (resources[i] == null) {
-resources[i] = ResourceInformation.newInstance(types[i]);
-  }
-}
   }
 
   private static ResourceInformation newDefaultInformation(

http://git-wip-us.apache.org/repos/asf/hadoop/blob/39f43eb1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
index aefa85c..6375c4a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
@@ -112,6 +112,14 @@ public class DefaultResourceCalculator extends 
ResourceCalculator {
   }
 
   @Override
+  public Resource multiplyAndNormalizeUp(Resource r, double[] by,
+  Resource stepFactor) {
+return Resources.createResource(
+roundUp((long) (r.getMemorySize() * by[0] + 0.5),
+stepFactor.getMemorySize()));
+  }
+
+  @Override
   public Resource multiplyAndNormalizeDown(Resource r, double by,
   Resource stepFactor) {
 return Resources.createResource(
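`multiplyAndNormalizeUp` above scales a resource by a per-dimension factor and then rounds the result up to a multiple of the step factor. Assuming `roundUp` has the usual ceiling-to-a-multiple semantics (inferred from its usage here, not copied from the Hadoop source), the arithmetic is:

```java
/** Sketch of the ceiling-to-a-multiple rounding used by the calculator above. */
public class RoundUpDemo {
    /** Round value up to the nearest multiple of step (step > 0). */
    static long roundUp(long value, long step) {
        return ((value + step - 1) / step) * step;
    }

    /** Mirrors multiplyAndNormalizeUp for the single memory dimension. */
    static long multiplyAndNormalizeUp(long memory, double by, long stepMemory) {
        // +0.5 reproduces the round-half-up cast in the patch before normalizing
        return roundUp((long) (memory * by + 0.5), stepMemory);
    }
}
```

Normalizing up guarantees allocations never fall below the scaled guarantee, at the cost of rounding to the next allocation increment.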


[hadoop] Git Push Summary

2017-11-13 Thread asuresh
Repository: hadoop
Updated Tags:  refs/tags/release-2.9.0-RC3 [created] 9d7e054db




hadoop git commit: YARN-7466. ResourceRequest has a different default for allocationRequestId than Container. Contributed by Chandni Singh

2017-11-13 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk e14f03dfb -> 5323b0048


YARN-7466. ResourceRequest has a different default for allocationRequestId than 
Container. Contributed by Chandni Singh


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5323b004
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5323b004
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5323b004

Branch: refs/heads/trunk
Commit: 5323b0048b743771276ba860b10c27b23a70bf9e
Parents: e14f03d
Author: Jian He 
Authored: Mon Nov 13 15:37:39 2017 -0800
Committer: Jian He 
Committed: Mon Nov 13 15:37:39 2017 -0800

--
 .../hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5323b004/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
index e69c07b..7769c48 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
@@ -388,7 +388,7 @@ message ResourceRequestProto {
   optional bool relax_locality = 5 [default = true];
   optional string node_label_expression = 6;
   optional ExecutionTypeRequestProto execution_type_request = 7;
-  optional int64 allocation_request_id = 8 [default = 0];
+  optional int64 allocation_request_id = 8 [default = -1];
   optional ProfileCapabilityProto profile = 9;
 }
 





[45/50] [abbrv] hadoop git commit: HDFS-12705. WebHdfsFileSystem exceptions should retain the caused by exception. Contributed by Hanisha Koneru.

2017-11-13 Thread kkaranasos
HDFS-12705. WebHdfsFileSystem exceptions should retain the caused by exception. 
Contributed by Hanisha Koneru.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4908a897
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4908a897
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4908a897

Branch: refs/heads/YARN-6592
Commit: 4908a8970eaf500642a9d8427e322032c1ec047a
Parents: 040a38d
Author: Arpit Agarwal 
Authored: Mon Nov 13 11:30:39 2017 -0800
Committer: Arpit Agarwal 
Committed: Mon Nov 13 11:30:39 2017 -0800

--
 .../hadoop/hdfs/web/WebHdfsFileSystem.java  |  1 +
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java | 59 
 2 files changed, 60 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4908a897/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index 34f5d6e..c1aef49 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -780,6 +780,7 @@ public class WebHdfsFileSystem extends FileSystem
   try {
 IOException newIoe = ioe.getClass().getConstructor(String.class)
 .newInstance(node + ": " + ioe.getMessage());
+newIoe.initCause(ioe.getCause());
 newIoe.setStackTrace(ioe.getStackTrace());
 ioe = newIoe;
   } catch (NoSuchMethodException | SecurityException 
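The one-line fix above matters because the surrounding code rebuilds the IOException reflectively (to prepend the datanode name to the message) and previously dropped the original cause chain. A self-contained sketch of the same re-wrap, with a hypothetical helper name:

```java
import java.io.IOException;

/** Sketch of reflectively re-wrapping an IOException while keeping its cause. */
public class RewrapDemo {
    static IOException prependNode(String node, IOException ioe) {
        try {
            // Rebuild the same exception subclass with an augmented message
            IOException newIoe = ioe.getClass().getConstructor(String.class)
                .newInstance(node + ": " + ioe.getMessage());
            newIoe.initCause(ioe.getCause());        // the HDFS-12705 fix: keep the cause
            newIoe.setStackTrace(ioe.getStackTrace());
            return newIoe;
        } catch (ReflectiveOperationException | SecurityException e) {
            return ioe; // subclass lacks a String constructor; fall back to the original
        }
    }
}
```

Without the `initCause` call, retry loops that re-wrap per attempt would surface only the outermost message, hiding the root failure.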

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4908a897/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
index 3ee8ad0..500ec0a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
@@ -1452,4 +1452,63 @@ public class TestWebHDFS {
   }
 }
   }
+
+  /**
+   * Tests that {@link WebHdfsFileSystem.AbstractRunner} propagates original
+   * exception's stacktrace and cause during runWithRetry attempts.
+   * @throws Exception
+   */
+  @Test
+  public void testExceptionPropogationInAbstractRunner() throws Exception{
+final Configuration conf = WebHdfsTestUtil.createConf();
+final Path dir = new Path("/testExceptionPropogationInAbstractRunner");
+
+conf.setBoolean(HdfsClientConfigKeys.Retry.POLICY_ENABLED_KEY, true);
+
+final short numDatanodes = 1;
+final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+.numDataNodes(numDatanodes)
+.build();
+try {
+  cluster.waitActive();
+  final FileSystem fs = WebHdfsTestUtil
+  .getWebHdfsFileSystem(conf, WebHdfsConstants.WEBHDFS_SCHEME);
+
+  //create a file
+  final long length = 1L << 20;
+  final Path file1 = new Path(dir, "testFile");
+
+  DFSTestUtil.createFile(fs, file1, length, numDatanodes, 20120406L);
+
+  //get file status and check that it was written properly.
+  final FileStatus s1 = fs.getFileStatus(file1);
+  assertEquals("Write failed for file " + file1, length, s1.getLen());
+
+  FSDataInputStream in = fs.open(file1);
+  in.read(); // Connection is made only when the first read() occurs.
+  final WebHdfsInputStream webIn =
+  (WebHdfsInputStream)(in.getWrappedStream());
+
+  final String msg = "Throwing dummy exception";
+  IOException ioe = new IOException(msg, new DummyThrowable());
+
+  WebHdfsFileSystem.ReadRunner readRunner = spy(webIn.getReadRunner());
+  doThrow(ioe).when(readRunner).getResponse(any(HttpURLConnection.class));
+
+  webIn.setReadRunner(readRunner);
+
+  try {
+webIn.read();
+fail("Read should have thrown IOException.");
+  } catch (IOException e) {
+assertTrue(e.getMessage().contains(msg));
+assertTrue(e.getCause() instanceof DummyThrowable);
+  }
+} finally {
+  cluster.shutdown();
+}
+  }
+
+  final static class DummyThrowable extends Throwable {
+  }
 }



[22/50] [abbrv] hadoop git commit: YARN-7386. Duplicate Strings in various places in Yarn memory (mi...@cloudera.com via rkanter)

2017-11-13 Thread kkaranasos
YARN-7386. Duplicate Strings in various places in Yarn memory 
(mi...@cloudera.com via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a2c150a7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a2c150a7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a2c150a7

Branch: refs/heads/YARN-6592
Commit: a2c150a7369cc629bbfaa2dfa3a8495b6f9c42e2
Parents: ba81366
Author: Robert Kanter 
Authored: Thu Nov 9 12:07:46 2017 -0800
Committer: Robert Kanter 
Committed: Thu Nov 9 12:12:52 2017 -0800

--
 .../impl/pb/ContainerLaunchContextPBImpl.java   | 16 
 .../yarn/api/records/impl/pb/ContainerPBImpl.java   |  2 +-
 .../server/resourcemanager/rmapp/RMAppImpl.java |  7 ---
 .../rmapp/attempt/RMAppAttemptImpl.java |  3 ++-
 4 files changed, 19 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2c150a7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
index d722cc5..a9f2ee3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
@@ -27,6 +27,7 @@ import java.util.Map;
 
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.util.StringInterner;
 import org.apache.hadoop.yarn.api.records.ApplicationAccessType;
 import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
 import org.apache.hadoop.yarn.api.records.ContainerRetryContext;
@@ -392,7 +393,8 @@ extends ContainerLaunchContext {
this.environment = new HashMap<String, String>();
 
 for (StringStringMapProto c : list) {
-  this.environment.put(c.getKey(), c.getValue());
+  this.environment.put(StringInterner.weakIntern(c.getKey()),
+  StringInterner.weakIntern(c.getValue()));
 }
   }
   
@@ -402,7 +404,10 @@ extends ContainerLaunchContext {
   return;
 initEnv();
 this.environment.clear();
-this.environment.putAll(env);
+for (Map.Entry<String, String> e : env.entrySet()) {
+  this.environment.put(StringInterner.weakIntern(e.getKey()),
+  StringInterner.weakIntern(e.getValue()));
+}
   }
   
   private void addEnvToProto() {
@@ -464,7 +469,7 @@ extends ContainerLaunchContext {
 
 for (ApplicationACLMapProto aclProto : list) {
   this.applicationACLS.put(ProtoUtils.convertFromProtoFormat(aclProto
-  .getAccessType()), aclProto.getAcl());
+  .getAccessType()), StringInterner.weakIntern(aclProto.getAcl()));
 }
   }
 
@@ -513,7 +518,10 @@ extends ContainerLaunchContext {
   return;
 initApplicationACLs();
 this.applicationACLS.clear();
-this.applicationACLS.putAll(appACLs);
+for (Map.Entry<ApplicationAccessType, String> e : appACLs.entrySet()) {
+  this.applicationACLS.put(e.getKey(),
+  StringInterner.weakIntern(e.getValue()));
+}
   }
 
   public ContainerRetryContext getContainerRetryContext() {
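The `weakIntern` calls above deduplicate equal strings (env keys/values, ACL strings) while still letting unused entries be garbage collected. Hadoop's `StringInterner` delegates to Guava's weak interner; the following is only a minimal sketch of the idea, not the actual implementation.

```java
import java.lang.ref.WeakReference;
import java.util.WeakHashMap;

// Minimal weak interner in the spirit of Hadoop's StringInterner (a sketch,
// not the real class, which delegates to Guava's Interners.newWeakInterner).
public class WeakInternSketch {

    private static final WeakHashMap<String, WeakReference<String>> POOL =
        new WeakHashMap<>();

    // Return one canonical instance for equal strings, held only weakly so
    // entries that fall out of use can still be collected.
    public static synchronized String weakIntern(String s) {
        if (s == null) {
            return null;
        }
        WeakReference<String> ref = POOL.get(s);
        String canonical = (ref == null) ? null : ref.get();
        if (canonical == null) {
            canonical = s;
            POOL.put(s, new WeakReference<>(s));
        }
        return canonical;
    }

    public static void main(String[] args) {
        // Two equal but distinct String objects collapse to one instance.
        String a = new String("JAVA_HOME");
        String b = new String("JAVA_HOME");
        System.out.println(a == b);                         // false
        System.out.println(weakIntern(a) == weakIntern(b)); // true
    }
}
```

This is why the change helps YARN memory: container launch contexts repeat the same env keys and ACL strings across thousands of containers, and interning stores each distinct value once.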

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2c150a7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerPBImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerPBImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerPBImpl.java
index b6e22d1..be84938 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerPBImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerPBImpl.java
@@ -181,7 +181,7 @@ public class ContainerPBImpl extends Container {
   builder.clearNodeHttpAddress();
   return;
 }
-builder.setNodeHttpAddress(nodeHttpAddress);
+builder.setNodeHttpAddress(nodeHttpAddress.intern());
   }
 
   @Override


[36/50] [abbrv] hadoop git commit: YARN-7475. Fix Container log link in new YARN UI. (Sunil G via Subru).

2017-11-13 Thread kkaranasos
YARN-7475. Fix Container log link in new YARN UI. (Sunil G via Subru).

(cherry picked from commit 3c5b46c2edd69bb238d635ae61ff91656dec23df)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3e260778
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3e260778
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3e260778

Branch: refs/heads/YARN-6592
Commit: 3e26077848ed1d7461576116a9ae841d38aa3ef1
Parents: ff9f7fc
Author: Subru Krishnan 
Authored: Sun Nov 12 09:18:08 2017 -0800
Committer: Subru Krishnan 
Committed: Sun Nov 12 09:53:39 2017 -0800

--
 .../src/main/webapp/app/adapters/yarn-container-log.js   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e260778/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-container-log.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-container-log.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-container-log.js
index 8d1b12b..df46127 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-container-log.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-container-log.js
@@ -42,9 +42,9 @@ export default DS.RESTAdapter.extend({
 var nodeHttpAddr = splits[0];
 var containerId = splits[1];
 var filename = splits[2];
-this.host = this.get('host') + nodeHttpAddr;
 var url = this._buildURL();
-url = url + "/containerlogs/" + containerId + "/" + filename;
+url = url.replace("{nodeAddress}", nodeHttpAddr)  + "/containerlogs/"
+   + containerId + "/" + filename;
 return url;
   },
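The fix substitutes the node address into a `{nodeAddress}` token in the URL template instead of mutating the adapter's shared `host` property (which broke subsequent requests). A sketch of the substitution; the template string here is illustrative, not the UI's actual config value.

```java
// Sketch of the YARN-7475 fix: build the container-log URL by replacing a
// placeholder in a template rather than appending to mutable adapter state.
public class ContainerLogUrlSketch {

    public static String buildLogUrl(String template, String nodeHttpAddr,
                                     String containerId, String filename) {
        // Same shape as the adapter code: replace the token, then append path.
        return template.replace("{nodeAddress}", nodeHttpAddr)
            + "/containerlogs/" + containerId + "/" + filename;
    }

    public static void main(String[] args) {
        System.out.println(buildLogUrl("http://{nodeAddress}/node",
            "nm-host:8042", "container_1_0001_01_000001", "stderr"));
        // http://nm-host:8042/node/containerlogs/container_1_0001_01_000001/stderr
    }
}
```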
 





[17/50] [abbrv] hadoop git commit: HADOOP-14872. CryptoInputStream should implement unbuffer. Contributed by John Zhuge.

2017-11-13 Thread kkaranasos
HADOOP-14872. CryptoInputStream should implement unbuffer. Contributed by John 
Zhuge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6c32ddad
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6c32ddad
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6c32ddad

Branch: refs/heads/YARN-6592
Commit: 6c32ddad30240a251caaefdf7fec9ff8ad177a7c
Parents: bf6a660
Author: John Zhuge 
Authored: Tue Nov 7 00:09:34 2017 -0800
Committer: John Zhuge 
Committed: Thu Nov 9 10:16:12 2017 -0800

--
 .../apache/hadoop/crypto/CryptoInputStream.java | 32 -
 .../hadoop/crypto/CryptoStreamsTestBase.java| 72 +++-
 .../apache/hadoop/crypto/TestCryptoStreams.java | 28 ++--
 .../crypto/TestCryptoStreamsForLocalFS.java |  5 ++
 .../hadoop/crypto/TestCryptoStreamsNormal.java  |  5 ++
 5 files changed, 133 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c32ddad/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
index 0be6e34..a2273bf 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
@@ -30,20 +30,23 @@ import java.util.EnumSet;
 import java.util.Queue;
 import java.util.concurrent.ConcurrentLinkedQueue;
 
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.ByteBufferReadable;
 import org.apache.hadoop.fs.CanSetDropBehind;
 import org.apache.hadoop.fs.CanSetReadahead;
+import org.apache.hadoop.fs.CanUnbuffer;
 import org.apache.hadoop.fs.FSExceptionMessages;
 import org.apache.hadoop.fs.HasEnhancedByteBufferAccess;
 import org.apache.hadoop.fs.HasFileDescriptor;
 import org.apache.hadoop.fs.PositionedReadable;
 import org.apache.hadoop.fs.ReadOption;
 import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.fs.StreamCapabilities;
+import org.apache.hadoop.fs.StreamCapabilitiesPolicy;
 import org.apache.hadoop.io.ByteBufferPool;
-
-import com.google.common.base.Preconditions;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * CryptoInputStream decrypts data. It is not thread-safe. AES CTR mode is
@@ -61,7 +64,7 @@ import com.google.common.base.Preconditions;
 public class CryptoInputStream extends FilterInputStream implements 
 Seekable, PositionedReadable, ByteBufferReadable, HasFileDescriptor, 
 CanSetDropBehind, CanSetReadahead, HasEnhancedByteBufferAccess, 
-ReadableByteChannel {
+ReadableByteChannel, CanUnbuffer, StreamCapabilities {
   private final byte[] oneByteBuf = new byte[1];
   private final CryptoCodec codec;
   private final Decryptor decryptor;
@@ -719,4 +722,27 @@ public class CryptoInputStream extends FilterInputStream implements
   public boolean isOpen() {
 return !closed;
   }
+
+  private void cleanDecryptorPool() {
+decryptorPool.clear();
+  }
+
+  @Override
+  public void unbuffer() {
+cleanBufferPool();
+cleanDecryptorPool();
+StreamCapabilitiesPolicy.unbuffer(in);
+  }
+
+  @Override
+  public boolean hasCapability(String capability) {
+switch (StringUtils.toLowerCase(capability)) {
+case StreamCapabilities.READAHEAD:
+case StreamCapabilities.DROPBEHIND:
+case StreamCapabilities.UNBUFFER:
+  return true;
+default:
+  return false;
+}
+  }
 }
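A sketch of the `CanUnbuffer`/`StreamCapabilities` pattern added here: `unbuffer()` releases pooled resources so an idle stream holds no memory, and `hasCapability()` advertises support with a case-insensitive switch. The capability strings and pool contents below are illustrative assumptions, not necessarily the exact Hadoop constants.

```java
import java.util.ArrayDeque;
import java.util.Locale;
import java.util.Queue;

// Sketch of the CanUnbuffer/StreamCapabilities pattern (illustrative names).
public class UnbufferSketch {

    // Assumed capability identifiers; the real values live in
    // org.apache.hadoop.fs.StreamCapabilities.
    static final String READAHEAD = "in:readahead";
    static final String DROPBEHIND = "dropbehind";
    static final String UNBUFFER = "in:unbuffer";

    private final Queue<byte[]> bufferPool = new ArrayDeque<>();
    private final Queue<Object> decryptorPool = new ArrayDeque<>();

    UnbufferSketch() {
        bufferPool.add(new byte[8192]);
        decryptorPool.add(new Object());
    }

    // Drop cached buffers and decryptors, as CryptoInputStream.unbuffer()
    // does with cleanBufferPool()/cleanDecryptorPool() above.
    public void unbuffer() {
        bufferPool.clear();
        decryptorPool.clear();
    }

    public boolean hasCapability(String capability) {
        switch (capability.toLowerCase(Locale.ROOT)) {
        case READAHEAD:
        case DROPBEHIND:
        case UNBUFFER:
            return true;
        default:
            return false;
        }
    }

    public int pooledObjects() {
        return bufferPool.size() + decryptorPool.size();
    }

    public static void main(String[] args) {
        UnbufferSketch s = new UnbufferSketch();
        s.unbuffer();
        System.out.println(s.pooledObjects());              // 0
        System.out.println(s.hasCapability("IN:UNBUFFER")); // true
    }
}
```

Callers that cache many open streams (e.g. HBase over encrypted HDFS) invoke `unbuffer()` between uses so idle streams stop pinning buffers and decryptors.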

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c32ddad/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
index 9183524..259383d 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
@@ -27,6 +27,7 @@ import java.util.EnumSet;
 import java.util.Random;
 
 import org.apache.hadoop.fs.ByteBufferReadable;
+import org.apache.hadoop.fs.CanUnbuffer;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import 

[07/50] [abbrv] hadoop git commit: HDFS-12788. Reset the upload button when file upload fails. Contributed by Brahma Reddy Battula

2017-11-13 Thread kkaranasos
HDFS-12788. Reset the upload button when file upload fails. Contributed by 
Brahma Reddy Battula


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/410d0319
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/410d0319
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/410d0319

Branch: refs/heads/YARN-6592
Commit: 410d0319cf72b9b5f3807c522237f52121d98cd5
Parents: bb8a6ee
Author: Brahma Reddy Battula 
Authored: Wed Nov 8 14:41:16 2017 +0530
Committer: Brahma Reddy Battula 
Committed: Wed Nov 8 14:41:16 2017 +0530

--
 .../hadoop-hdfs/src/main/webapps/hdfs/explorer.js| 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/410d0319/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
index dae3519..ed1f832 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
@@ -440,22 +440,29 @@
   }).complete(function(data) {
 numCompleted++;
 if(numCompleted == files.length) {
-  $('#modal-upload-file').modal('hide');
-  $('#modal-upload-file-button').button('reset');
+  reset_upload_button();
   browse_directory(current_directory);
 }
   }).error(function(jqXHR, textStatus, errorThrown) {
 numCompleted++;
+reset_upload_button();
show_err_msg("Couldn't upload the file " + file.file.name + ". " + errorThrown);
   });
 }).error(function(jqXHR, textStatus, errorThrown) {
   numCompleted++;
+  reset_upload_button();
   show_err_msg("Couldn't find datanode to write file. " + errorThrown);
 });
   })();
 }
   });
 
+  //Reset the upload button
+  function reset_upload_button() {
+$('#modal-upload-file').modal('hide');
+$('#modal-upload-file-button').button('reset');
+  }
+
   //Store the list of files which have been checked into session storage
   function store_selected_files(current_directory) {
 sessionStorage.setItem("source_directory", current_directory);





[38/50] [abbrv] hadoop git commit: HADOOP-15031. Fix javadoc issues in Hadoop Common. Contributed by Mukul Kumar Singh.

2017-11-13 Thread kkaranasos
HADOOP-15031. Fix javadoc issues in Hadoop Common. Contributed by Mukul Kumar 
Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/975a57a6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/975a57a6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/975a57a6

Branch: refs/heads/YARN-6592
Commit: 975a57a6886e81e412bea35bf597beccc807a66f
Parents: fb62bd6
Author: Akira Ajisaka 
Authored: Mon Nov 13 23:11:03 2017 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 13 23:12:23 2017 +0900

--
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java| 3 +--
 .../hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java | 3 +--
 2 files changed, 2 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/975a57a6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index 64021ad..be0ec87 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -973,8 +973,7 @@ public abstract class FileSystem extends Configured implements Closeable {
* @param opt If absent, assume {@link HandleOpt#path()}.
* @throws IllegalArgumentException If the FileStatus does not belong to
* this FileSystem
-   * @throws UnsupportedOperationException If
-   * {@link #createPathHandle(FileStatus, HandleOpt[])}
+   * @throws UnsupportedOperationException If {@link #createPathHandle}
* not overridden by subclass.
* @throws UnsupportedOperationException If this FileSystem cannot enforce
* the specified constraints.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/975a57a6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
index 550e6b9..e455abf 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
@@ -338,8 +338,7 @@ public final class Options {
 }
 
 /**
- * Utility function for mapping
- * {@link FileSystem#getPathHandle(FileStatus, HandleOpt[])} to a
+ * Utility function for mapping {@link FileSystem#getPathHandle} to a
  * fixed set of handle options.
  * @param fs Target filesystem
  * @param opt Options to bind in partially evaluated function





[29/50] [abbrv] hadoop git commit: HADOOP-14960. Add GC time percentage monitor/alerter. Contributed by Misha Dmitriev.

2017-11-13 Thread kkaranasos
HADOOP-14960. Add GC time percentage monitor/alerter. Contributed by Misha 
Dmitriev.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3c6adda2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3c6adda2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3c6adda2

Branch: refs/heads/YARN-6592
Commit: 3c6adda291745c592741b87cd613214ae11887e4
Parents: 10a1f55
Author: Xiao Chen 
Authored: Thu Nov 9 21:05:34 2017 -0800
Committer: Xiao Chen 
Committed: Thu Nov 9 21:06:06 2017 -0800

--
 .../hadoop/metrics2/source/JvmMetrics.java  |  15 ++
 .../hadoop/metrics2/source/JvmMetricsInfo.java  |   3 +-
 .../org/apache/hadoop/util/GcTimeMonitor.java   | 242 +++
 .../hadoop/metrics2/source/TestJvmMetrics.java  |  92 ++-
 4 files changed, 345 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c6adda2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
index e3f8754..8c3375f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
@@ -28,6 +28,8 @@ import java.util.List;
 import java.util.concurrent.ConcurrentHashMap;
 
 import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.log.metrics.EventCounter;
 import org.apache.hadoop.metrics2.MetricsCollector;
@@ -39,6 +41,8 @@ import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.metrics2.lib.Interns;
 import static org.apache.hadoop.metrics2.source.JvmMetricsInfo.*;
 import static org.apache.hadoop.metrics2.impl.MsInfo.*;
+
+import org.apache.hadoop.util.GcTimeMonitor;
 import org.apache.hadoop.util.JvmPauseMonitor;
 
 /**
@@ -85,6 +89,7 @@ public class JvmMetrics implements MetricsSource {
   private JvmPauseMonitor pauseMonitor = null;
  final ConcurrentHashMap<String, MetricsInfo[]> gcInfoCache =
      new ConcurrentHashMap<String, MetricsInfo[]>();
+  private GcTimeMonitor gcTimeMonitor = null;
 
   @VisibleForTesting
   JvmMetrics(String processName, String sessionId) {
@@ -96,6 +101,11 @@ public class JvmMetrics implements MetricsSource {
 this.pauseMonitor = pauseMonitor;
   }
 
+  public void setGcTimeMonitor(GcTimeMonitor gcTimeMonitor) {
+Preconditions.checkNotNull(gcTimeMonitor);
+this.gcTimeMonitor = gcTimeMonitor;
+  }
+
   public static JvmMetrics create(String processName, String sessionId,
   MetricsSystem ms) {
 return ms.register(JvmMetrics.name(), JvmMetrics.description(),
@@ -176,6 +186,11 @@ public class JvmMetrics implements MetricsSource {
   rb.addCounter(GcTotalExtraSleepTime,
   pauseMonitor.getTotalGcExtraSleepTime());
 }
+
+if (gcTimeMonitor != null) {
+  rb.addCounter(GcTimePercentage,
+  gcTimeMonitor.getLatestGcData().getGcTimePercentage());
+}
   }
 
   private MetricsInfo[] getGcInfo(String gcName) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c6adda2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
index 8da6785..74f670b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
@@ -51,7 +51,8 @@ public enum JvmMetricsInfo implements MetricsInfo {
   LogInfo("Total number of info log events"),
  GcNumWarnThresholdExceeded("Number of times that the GC warn threshold is exceeded"),
  GcNumInfoThresholdExceeded("Number of times that the GC info threshold is exceeded"),
-  GcTotalExtraSleepTime("Total GC extra sleep time in milliseconds");
+  GcTotalExtraSleepTime("Total GC extra sleep time in milliseconds"),
+  GcTimePercentage("Percentage of time the JVM was paused in GC");
 
   private final String desc;
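What the new `GcTimePercentage` counter reports can be approximated from the JVM's collector MXBeans: accumulated GC time observed over a window, expressed as a percentage. The real `GcTimeMonitor` maintains a sliding window and can invoke an alert handler past a threshold; this one-shot sketch only illustrates the arithmetic and is not the Hadoop class.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// One-shot sketch of a GC-time-percentage gauge (not Hadoop's GcTimeMonitor).
public class GcTimePercentageSketch {

    // Sum of accumulated GC time (ms) across all collectors.
    static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if undefined
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }

    // Percentage of the observation window spent in GC, clamped to [0, 100].
    public static int gcTimePercentage(long gcMillisInWindow, long windowMillis) {
        if (windowMillis <= 0) {
            return 0;
        }
        long pct = 100 * gcMillisInWindow / windowMillis;
        return (int) Math.max(0, Math.min(100, pct));
    }

    public static void main(String[] args) throws InterruptedException {
        long gcBefore = totalGcTimeMillis();
        long start = System.currentTimeMillis();
        Thread.sleep(100); // observation window
        long window = System.currentTimeMillis() - start;
        System.out.println(gcTimePercentage(totalGcTimeMillis() - gcBefore, window));
    }
}
```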
 


[42/50] [abbrv] hadoop git commit: YARN-7447. Fixed bug in create YARN services via RM. (Contributed by Billie Rinaldi)

2017-11-13 Thread kkaranasos
YARN-7447. Fixed bug in create YARN services via RM.  (Contributed by Billie 
Rinaldi)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa4b5c66
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa4b5c66
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa4b5c66

Branch: refs/heads/YARN-6592
Commit: fa4b5c669c04d83d92bc73ad72e8311d93c3ed0d
Parents: 0d6bab9
Author: Eric Yang 
Authored: Mon Nov 13 13:59:58 2017 -0500
Committer: Eric Yang 
Committed: Mon Nov 13 13:59:58 2017 -0500

--
 hadoop-yarn-project/hadoop-yarn/bin/yarn | 8 
 1 file changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa4b5c66/hadoop-yarn-project/hadoop-yarn/bin/yarn
--
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn 
b/hadoop-yarn-project/hadoop-yarn/bin/yarn
index 00596c2..d7b44b9 100755
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn
@@ -149,6 +149,14 @@ ${HADOOP_COMMON_HOME}/${HADOOP_COMMON_LIB_JARS_DIR}"
   if [[ -n "${YARN_RESOURCEMANAGER_HEAPSIZE}" ]]; then
 HADOOP_HEAPSIZE_MAX="${YARN_RESOURCEMANAGER_HEAPSIZE}"
   fi
+  local sld="${HADOOP_YARN_HOME}/${YARN_DIR},\
+${HADOOP_YARN_HOME}/${YARN_LIB_JARS_DIR},\
+${HADOOP_HDFS_HOME}/${HDFS_DIR},\
+${HADOOP_HDFS_HOME}/${HDFS_LIB_JARS_DIR},\
+${HADOOP_COMMON_HOME}/${HADOOP_COMMON_DIR},\
+${HADOOP_COMMON_HOME}/${HADOOP_COMMON_LIB_JARS_DIR}"
+  hadoop_translate_cygwin_path sld
+  hadoop_add_param HADOOP_OPTS service.libdir "-Dservice.libdir=${sld}"
 ;;
 rmadmin)
   HADOOP_CLASSNAME='org.apache.hadoop.yarn.client.cli.RMAdminCLI'





[21/50] [abbrv] hadoop git commit: YARN-7413. Support resource type in SLS (Contributed by Yufei Gu via Daniel Templeton)

2017-11-13 Thread kkaranasos
YARN-7413. Support resource type in SLS (Contributed by Yufei Gu via Daniel 
Templeton)

Change-Id: Ic0a897c123c5d2f57aae757ca6bcf1dad7b90d2b


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ba813661
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ba813661
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ba813661

Branch: refs/heads/YARN-6592
Commit: ba8136615ab66c450884614557eddc6509d63b7c
Parents: 462f6c4
Author: Daniel Templeton 
Authored: Thu Nov 9 12:09:48 2017 -0800
Committer: Daniel Templeton 
Committed: Thu Nov 9 12:09:48 2017 -0800

--
 .../org/apache/hadoop/yarn/sls/SLSRunner.java   | 86 +---
 .../hadoop/yarn/sls/conf/SLSConfiguration.java  |  6 +-
 .../yarn/sls/nodemanager/NMSimulator.java   | 15 ++--
 .../src/site/markdown/SchedulerLoadSimulator.md | 48 +++
 .../yarn/sls/nodemanager/TestNMSimulator.java   |  3 +-
 5 files changed, 117 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ba813661/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
index 9d6c3aa..ad4310f 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
@@ -62,6 +62,7 @@ import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.NodeState;
 import org.apache.hadoop.yarn.api.records.ReservationId;
 import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.ResourceInformation;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
@@ -84,6 +85,7 @@ import org.apache.hadoop.yarn.sls.synthetic.SynthJob;
 import org.apache.hadoop.yarn.sls.synthetic.SynthTraceJobProducer;
 import org.apache.hadoop.yarn.sls.utils.SLSUtils;
 import org.apache.hadoop.yarn.util.UTCClock;
+import org.apache.hadoop.yarn.util.resource.ResourceUtils;
 import org.apache.hadoop.yarn.util.resource.Resources;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -99,7 +101,7 @@ public class SLSRunner extends Configured implements Tool {
 
   // NM simulator
  private HashMap<NodeId, NMSimulator> nmMap;
-  private int nmMemoryMB, nmVCores;
+  private Resource nodeManagerResource;
   private String nodeFile;
 
   // AM simulator
@@ -178,6 +180,30 @@ public class SLSRunner extends Configured implements Tool {
 amClassMap.put(amType, Class.forName(tempConf.get(key)));
   }
 }
+
+nodeManagerResource = getNodeManagerResource();
+  }
+
+  private Resource getNodeManagerResource() {
+Resource resource = Resources.createResource(0);
+ResourceInformation[] infors = ResourceUtils.getResourceTypesArray();
+for (ResourceInformation info : infors) {
+  long value;
+  if (info.getName().equals(ResourceInformation.MEMORY_URI)) {
+value = getConf().getInt(SLSConfiguration.NM_MEMORY_MB,
+SLSConfiguration.NM_MEMORY_MB_DEFAULT);
+  } else if (info.getName().equals(ResourceInformation.VCORES_URI)) {
+value = getConf().getInt(SLSConfiguration.NM_VCORES,
+SLSConfiguration.NM_VCORES_DEFAULT);
+  } else {
+value = getConf().getLong(SLSConfiguration.NM_PREFIX +
+info.getName(), SLSConfiguration.NM_RESOURCE_DEFAULT);
+  }
+
+  resource.setResourceValue(info.getName(), value);
+}
+
+return resource;
   }
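The refactoring above generalizes the NM simulator from two hard-coded dimensions (memory, vcores) to whatever resource types are declared, looking each one up in configuration with a per-type default. A sketch of the same lookup logic with configuration simulated as a plain map; key names and defaults here are illustrative, not the SLSConfiguration constants.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of YARN-7413's getNodeManagerResource(): walk declared resource
// types, preferring a configured value over a per-type default.
public class NodeResourceSketch {

    static final long MEMORY_DEFAULT = 10240; // illustrative defaults
    static final long VCORES_DEFAULT = 10;
    static final long OTHER_DEFAULT = 0;

    public static Map<String, Long> nodeManagerResource(
            String[] resourceTypes, Map<String, Long> conf) {
        Map<String, Long> resource = new LinkedHashMap<>();
        for (String name : resourceTypes) {
            long dflt;
            switch (name) {
            case "memory-mb": dflt = MEMORY_DEFAULT; break;
            case "vcores":    dflt = VCORES_DEFAULT; break;
            default:          dflt = OTHER_DEFAULT;  break;
            }
            // "yarn.sls.nm." is an assumed config prefix for the sketch.
            resource.put(name, conf.getOrDefault("yarn.sls.nm." + name, dflt));
        }
        return resource;
    }

    public static void main(String[] args) {
        Map<String, Long> conf = Map.of("yarn.sls.nm.gpu", 2L);
        System.out.println(nodeManagerResource(
            new String[] {"memory-mb", "vcores", "gpu"}, conf));
        // {memory-mb=10240, vcores=10, gpu=2}
    }
}
```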
 
   /**
@@ -261,10 +287,6 @@ public class SLSRunner extends Configured implements Tool {
 
   private void startNM() throws YarnException, IOException {
 // nm configuration
-nmMemoryMB = getConf().getInt(SLSConfiguration.NM_MEMORY_MB,
-SLSConfiguration.NM_MEMORY_MB_DEFAULT);
-nmVCores = getConf().getInt(SLSConfiguration.NM_VCORES,
-SLSConfiguration.NM_VCORES_DEFAULT);
 int heartbeatInterval =
 getConf().getInt(SLSConfiguration.NM_HEARTBEAT_INTERVAL_MS,
 SLSConfiguration.NM_HEARTBEAT_INTERVAL_MS_DEFAULT);
@@ -304,7 +326,7 @@ public class SLSRunner extends Configured implements Tool {
 for (String hostName : nodeSet) {
   // we randomize the heartbeat start time from zero to 1 interval
   NMSimulator nm = new NMSimulator();
-  nm.init(hostName, nmMemoryMB, nmVCores, random.nextInt(heartbeatInterval),
+  nm.init(hostName, nodeManagerResource, 

[08/50] [abbrv] hadoop git commit: MAPREDUCE-6997. Moving logging APIs over to slf4j in hadoop-mapreduce-client-hs. Contributed by Gergely Novák.

2017-11-13 Thread kkaranasos
MAPREDUCE-6997. Moving logging APIs over to slf4j in 
hadoop-mapreduce-client-hs. Contributed by Gergely Novák.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ffee10b6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ffee10b6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ffee10b6

Branch: refs/heads/YARN-6592
Commit: ffee10b68ef1f2d75c9d0df9140c2a605f826724
Parents: 410d031
Author: Akira Ajisaka 
Authored: Wed Nov 8 19:21:43 2017 +0900
Committer: Akira Ajisaka 
Committed: Wed Nov 8 19:21:43 2017 +0900

--
 .../mapreduce/v2/hs/CachedHistoryStorage.java   |  7 ---
 .../hadoop/mapreduce/v2/hs/CompletedJob.java|  6 +++---
 .../hadoop/mapreduce/v2/hs/HSAuditLogger.java   |  7 ---
 .../hadoop/mapreduce/v2/hs/HSProxies.java   |  6 +++---
 .../mapreduce/v2/hs/HistoryClientService.java   |  7 ---
 .../mapreduce/v2/hs/HistoryFileManager.java | 10 +
 ...istoryServerFileSystemStateStoreService.java | 20 +-
 .../HistoryServerLeveldbStateStoreService.java  | 22 ++--
 .../v2/hs/JHSDelegationTokenSecretManager.java  |  6 +++---
 .../hadoop/mapreduce/v2/hs/JobHistory.java  |  6 +++---
 .../mapreduce/v2/hs/JobHistoryServer.java   |  9 
 .../hadoop/mapreduce/v2/hs/PartialJob.java  |  6 +++---
 .../mapreduce/v2/hs/server/HSAdminServer.java   |  7 ---
 .../mapreduce/v2/hs/TestJobHistoryEvents.java   |  7 ---
 .../mapreduce/v2/hs/TestJobHistoryParsing.java  |  7 ---
 .../mapreduce/v2/hs/webapp/TestHSWebApp.java|  6 +++---
 16 files changed, 74 insertions(+), 65 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ffee10b6/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CachedHistoryStorage.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CachedHistoryStorage.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CachedHistoryStorage.java
index c59d17f..b001ae4 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CachedHistoryStorage.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CachedHistoryStorage.java
@@ -30,8 +30,6 @@ import com.google.common.cache.CacheLoader;
 import com.google.common.cache.LoadingCache;
 import com.google.common.cache.Weigher;
 import com.google.common.util.concurrent.UncheckedExecutionException;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.mapreduce.v2.api.records.JobId;
 import org.apache.hadoop.mapreduce.v2.api.records.JobReport;
@@ -45,13 +43,16 @@ import org.apache.hadoop.service.AbstractService;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Manages an in memory cache of parsed Job History files.
  */
 public class CachedHistoryStorage extends AbstractService implements
 HistoryStorage {
-  private static final Log LOG = LogFactory.getLog(CachedHistoryStorage.class);
+  private static final Logger LOG =
+  LoggerFactory.getLogger(CachedHistoryStorage.class);
 
  private LoadingCache<JobId, Job> loadedJobCache = null;
   private int loadedJobCacheSize;
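The main API difference after this slf4j migration is `{}` parameterized logging, which defers rendering of arguments until a message is actually emitted (commons-logging forces eager string concatenation). A sketch of the placeholder substitution; slf4j's real `MessageFormatter` is more elaborate (it handles escaping, arrays, etc.).

```java
// Sketch of slf4j-style "{}" placeholder substitution (not slf4j itself).
public class Slf4jFormatSketch {

    public static String format(String template, Object... args) {
        StringBuilder sb = new StringBuilder();
        int arg = 0;
        int i = 0;
        while (i < template.length()) {
            if (arg < args.length && i + 1 < template.length()
                    && template.charAt(i) == '{' && template.charAt(i + 1) == '}') {
                sb.append(args[arg++]); // substitute the next argument
                i += 2;
            } else {
                sb.append(template.charAt(i++));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(format("Loaded {} jobs in {} ms", 5, 12));
        // Loaded 5 jobs in 12 ms
    }
}
```

With a real slf4j `Logger`, `LOG.debug("Loaded {} jobs", n)` pays the formatting cost only when debug logging is enabled, which is why the migration also removes `isDebugEnabled()` guard boilerplate.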

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ffee10b6/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
index 5afb645..a4e75f7 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
@@ -31,8 +31,6 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.locks.Lock;
 import 

[01/50] [abbrv] hadoop git commit: Merge branch 'yarn-native-services' into trunk [Forced Update!]

2017-11-13 Thread kkaranasos
Repository: hadoop
Updated Branches:
  refs/heads/YARN-6592 667e54a83 -> 26684d89a (forced update)


Merge branch 'yarn-native-services' into trunk


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cbc632d9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cbc632d9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cbc632d9

Branch: refs/heads/YARN-6592
Commit: cbc632d9abf08c56a7fc02be51b2718af30bad28
Parents: dcd99c4 a55d073
Author: Jian He 
Authored: Mon Nov 6 14:02:19 2017 -0800
Committer: Jian He 
Committed: Mon Nov 6 14:02:19 2017 -0800

--
 LICENSE.txt |1 +
 NOTICE.txt  |   10 +
 .../resources/assemblies/hadoop-yarn-dist.xml   |   25 +
 hadoop-project/pom.xml  |   44 +-
 hadoop-project/src/site/site.xml|   10 +-
 hadoop-yarn-project/hadoop-yarn/bin/yarn|   18 +-
 .../hadoop-yarn/conf/yarn-env.sh|   18 +
 .../dev-support/findbugs-exclude.xml|   16 +-
 .../hadoop/yarn/conf/YarnConfiguration.java |2 +
 .../dev-support/findbugs-exclude.xml|   20 +
 .../hadoop-yarn-services-api/pom.xml|  130 ++
 .../hadoop/yarn/service/webapp/ApiServer.java   |  298 +++
 .../yarn/service/webapp/ApiServerWebApp.java|  161 ++
 .../definition/YARN-Services-Examples.md|  245 +++
 ...RN-Simplified-V1-API-Layer-For-Services.yaml |  471 +
 .../src/main/resources/log4j-server.properties  |   76 +
 .../src/main/resources/webapps/api-server/app   |   16 +
 .../src/main/webapp/WEB-INF/web.xml |   36 +
 .../hadoop/yarn/service/ServiceClientTest.java  |  107 ++
 .../hadoop/yarn/service/TestApiServer.java  |  366 
 .../dev-support/findbugs-exclude.xml|   48 +
 .../conf/yarnservice-log4j.properties   |   62 +
 .../examples/httpd-no-dns/httpd-no-dns.json |   62 +
 .../httpd-no-dns/httpd-proxy-no-dns.conf|   24 +
 .../examples/httpd/httpd-proxy.conf |   24 +
 .../examples/httpd/httpd.json   |   55 +
 .../examples/sleeper/sleeper.json   |   15 +
 .../hadoop-yarn-services-core/pom.xml   |  255 +++
 .../hadoop/yarn/service/ClientAMProtocol.java   |   40 +
 .../hadoop/yarn/service/ClientAMService.java|  132 ++
 .../yarn/service/ContainerFailureTracker.java   |   89 +
 .../hadoop/yarn/service/ServiceContext.java |   41 +
 .../hadoop/yarn/service/ServiceMaster.java  |  169 ++
 .../hadoop/yarn/service/ServiceMetrics.java |   94 +
 .../hadoop/yarn/service/ServiceScheduler.java   |  691 +++
 .../yarn/service/api/ServiceApiConstants.java   |   74 +
 .../yarn/service/api/records/Artifact.java  |  168 ++
 .../yarn/service/api/records/BaseResource.java  |   52 +
 .../yarn/service/api/records/Component.java |  430 +
 .../service/api/records/ComponentState.java |   30 +
 .../yarn/service/api/records/ConfigFile.java|  233 +++
 .../yarn/service/api/records/ConfigFormat.java  |   67 +
 .../yarn/service/api/records/Configuration.java |  225 +++
 .../yarn/service/api/records/Container.java |  298 +++
 .../service/api/records/ContainerState.java |   30 +
 .../hadoop/yarn/service/api/records/Error.java  |  129 ++
 .../service/api/records/PlacementPolicy.java|  102 +
 .../service/api/records/ReadinessCheck.java |  183 ++
 .../yarn/service/api/records/Resource.java  |  161 ++
 .../yarn/service/api/records/Service.java   |  390 
 .../yarn/service/api/records/ServiceState.java  |   33 +
 .../yarn/service/api/records/ServiceStatus.java |  148 ++
 .../yarn/service/client/ClientAMProxy.java  |   57 +
 .../yarn/service/client/ServiceClient.java  |  960 ++
 .../yarn/service/component/Component.java   |  584 ++
 .../yarn/service/component/ComponentEvent.java  |   83 +
 .../service/component/ComponentEventType.java   |   27 +
 .../yarn/service/component/ComponentState.java  |   25 +
 .../component/instance/ComponentInstance.java   |  549 ++
 .../instance/ComponentInstanceEvent.java|   58 +
 .../instance/ComponentInstanceEventType.java|   26 +
 .../component/instance/ComponentInstanceId.java |   91 +
 .../instance/ComponentInstanceState.java|   26 +
 .../yarn/service/conf/RestApiConstants.java |   39 +
 .../yarn/service/conf/SliderExitCodes.java  |   88 +
 .../yarn/service/conf/YarnServiceConf.java  |  113 ++
 .../yarn/service/conf/YarnServiceConstants.java |   92 +
 .../containerlaunch/AbstractLauncher.java   |  271 +++
 .../containerlaunch/ClasspathConstructor.java   |  172 ++
 .../containerlaunch/CommandLineBuilder.java |   86 +
 .../containerlaunch/ContainerLaunchService.java |  101 +
 .../containerlaunch/CredentialUtils.java|  319 
 

[47/50] [abbrv] hadoop git commit: HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)

2017-11-13 Thread kkaranasos
HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f871b754
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f871b754
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f871b754

Branch: refs/heads/YARN-6592
Commit: f871b7541a5375eb117eafb9a091e4f59401231f
Parents: b07e68b
Author: Arun Suresh 
Authored: Mon Nov 13 14:37:36 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 14:37:36 2017 -0800

--
 LICENSE.txt | 25 +
 1 file changed, 25 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f871b754/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index b0cef03..447c609 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -699,6 +699,31 @@ hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery
 Apache HBase - Server which contains JQuery minified javascript library version 1.8.3
 Microsoft JDBC Driver for SQLServer - version 6.2.1.jre7
+
+
+MIT License
+
+Copyright (c) 2003-2017 Optimatika
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+For:
 oj! Algorithms - version 43.0
 

 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[30/50] [abbrv] hadoop git commit: HDFS-12797. Add Test for NFS mount of unsupported filesystems like (file:///). Contributed by Mukul Kumar Singh.

2017-11-13 Thread kkaranasos
HDFS-12797. Add Test for NFS mount of unsupported filesystems like (file:///). Contributed by Mukul Kumar Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8a1bd9a4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8a1bd9a4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8a1bd9a4

Branch: refs/heads/YARN-6592
Commit: 8a1bd9a4f4b8864aa560094a53d43ef732d378e5
Parents: 3c6adda
Author: Jitendra Pandey 
Authored: Thu Nov 9 23:53:17 2017 -0800
Committer: Jitendra Pandey 
Committed: Thu Nov 9 23:53:17 2017 -0800

--
 .../hadoop/hdfs/nfs/nfs3/TestExportsTable.java  | 88 +++-
 1 file changed, 87 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a1bd9a4/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestExportsTable.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestExportsTable.java b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestExportsTable.java
index 211a166..a5c3e7a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestExportsTable.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestExportsTable.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hdfs.nfs.nfs3;
 import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
+import java.nio.file.FileSystemException;
 
 import org.apache.commons.lang3.RandomStringUtils;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
@@ -33,9 +34,14 @@ import org.apache.hadoop.hdfs.nfs.conf.NfsConfigKeys;
 import org.apache.hadoop.hdfs.nfs.conf.NfsConfiguration;
 import org.apache.hadoop.hdfs.nfs.mount.Mountd;
 import org.apache.hadoop.hdfs.nfs.mount.RpcProgramMountd;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.ExpectedException;
 
 public class TestExportsTable {
+
+  @Rule
+  public ExpectedException exception = ExpectedException.none();
  
   @Test
   public void testHdfsExportPoint() throws IOException {
@@ -70,7 +76,7 @@ public class TestExportsTable {
   }
 
   @Test
-  public void testViewFsExportPoint() throws IOException {
+  public void testViewFsMultipleExportPoint() throws IOException {
 NfsConfiguration config = new NfsConfiguration();
 MiniDFSCluster cluster = null;
 String clusterName = RandomStringUtils.randomAlphabetic(10);
@@ -183,6 +189,56 @@ public class TestExportsTable {
   }
 
   @Test
+  public void testViewFsRootExportPoint() throws IOException {
+NfsConfiguration config = new NfsConfiguration();
+MiniDFSCluster cluster = null;
+String clusterName = RandomStringUtils.randomAlphabetic(10);
+
+String exportPoint = "/";
+config.setStrings(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY, exportPoint);
+config.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY,
+FsConstants.VIEWFS_SCHEME + "://" + clusterName);
+// Use ephemeral port in case tests are running in parallel
+config.setInt("nfs3.mountd.port", 0);
+config.setInt("nfs3.server.port", 0);
+config.set("nfs.http.address", "0.0.0.0:0");
+
+try {
+  cluster =
+  new MiniDFSCluster.Builder(config).nnTopology(
+  MiniDFSNNTopology.simpleFederatedTopology(2))
+  .numDataNodes(2)
+  .build();
+  cluster.waitActive();
+  DistributedFileSystem hdfs1 = cluster.getFileSystem(0);
+  DistributedFileSystem hdfs2 = cluster.getFileSystem(1);
+  cluster.waitActive();
+  Path base1 = new Path("/user1");
+  Path base2 = new Path("/user2");
+  hdfs1.delete(base1, true);
+  hdfs2.delete(base2, true);
+  hdfs1.mkdirs(base1);
+  hdfs2.mkdirs(base2);
+  ConfigUtil.addLink(config, clusterName, "/hdfs1",
+  hdfs1.makeQualified(base1).toUri());
+  ConfigUtil.addLink(config, clusterName, "/hdfs2",
+  hdfs2.makeQualified(base2).toUri());
+
+  exception.expect(FileSystemException.class);
+  exception.
+  expectMessage("Only HDFS is supported as underlyingFileSystem, "
+  + "fs scheme:viewfs");
+  // Start nfs
+  final Nfs3 nfsServer = new Nfs3(config);
+  nfsServer.startServiceInternal(false);
+} finally {
+  if (cluster != null) {
+cluster.shutdown();
+  }
+}
+  }
+
+  @Test
   public void testHdfsInternalExportPoint() throws IOException {
 NfsConfiguration config = new NfsConfiguration();
 MiniDFSCluster cluster = null;
@@ -219,4 +275,34 @@ public class TestExportsTable {
   }
 }
   }
+
+  @Test
+  public void 

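The new negative test above relies on JUnit's ExpectedException rule to assert both the thrown type and its message. Outside of JUnit, the same check can be sketched in plain Java; `startServer` below is a hypothetical stand-in for the NFS server start path, not Hadoop code:

```java
import java.nio.file.FileSystemException;

// Hypothetical sketch: reject any scheme other than hdfs with the same
// message shape the test above expects.
public class ExpectExceptionDemo {

    static void startServer(String scheme) throws FileSystemException {
        if (!"hdfs".equals(scheme)) {
            throw new FileSystemException(null, null,
                "Only HDFS is supported as underlyingFileSystem, fs scheme:" + scheme);
        }
    }

    public static void main(String[] args) {
        boolean caught = false;
        try {
            startServer("viewfs");
        } catch (FileSystemException e) {
            // Mirrors ExpectedException.expectMessage: substring match.
            caught = e.getMessage().contains("Only HDFS is supported");
        }
        System.out.println(caught);
    }
}
```

Note that in the JUnit version, `exception.expect(...)` and `exception.expectMessage(...)` must be called before the statement that is expected to throw, as the test above does just before starting the NFS server.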
[43/50] [abbrv] hadoop git commit: YARN-7442. [YARN-7069] Limit format of resource type name (Contributed by Wangda Tan via Daniel Templeton)

2017-11-13 Thread kkaranasos
YARN-7442. [YARN-7069] Limit format of resource type name (Contributed by Wangda Tan via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2e512f01
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2e512f01
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2e512f01

Branch: refs/heads/YARN-6592
Commit: 2e512f016ed689b5afbf1e27fdcd7c9f75b6dc9c
Parents: fa4b5c6
Author: Daniel Templeton 
Authored: Mon Nov 13 10:37:30 2017 -0800
Committer: Daniel Templeton 
Committed: Mon Nov 13 11:03:30 2017 -0800

--
 .../yarn/api/records/ResourceInformation.java   |  5 +++
 .../yarn/util/resource/ResourceUtils.java   | 26 ++
 .../yarn/util/resource/TestResourceUtils.java   | 37 
 3 files changed, 68 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e512f01/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
index 59908ef..67592cc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
@@ -65,6 +65,11 @@ public class ResourceInformation implements Comparable<ResourceInformation> {
   /**
* Set the name for the resource.
*
+   * A valid resource name must begin with a letter and contain only letters,
+   * numbers, and any of: '.', '_', or '-'. A valid resource name may also be
+   * optionally preceded by a name space followed by a slash. A valid name space
+   * consists of period-separated groups of letters, numbers, and dashes.
+   *
* @param rName name for the resource
*/
   public void setName(String rName) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e512f01/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
index 1170c72..3deace8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
@@ -62,6 +62,10 @@ public class ResourceUtils {
   private static final Pattern RESOURCE_REQUEST_VALUE_PATTERN =
   Pattern.compile("^([0-9]+) ?([a-zA-Z]*)$");
 
+  private static final Pattern RESOURCE_NAME_PATTERN = Pattern.compile(
+  "^(((\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?\\.)*"
+  + "\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?)/)?\\p{Alpha}([\\w.-]*)$");
+
   private static volatile boolean initializedResources = false;
  private static final Map<String, Integer> RESOURCE_NAME_TO_INDEX =
      new ConcurrentHashMap<String, Integer>();
@@ -209,6 +213,23 @@ public class ResourceUtils {
   }
 
   @VisibleForTesting
+  static void validateNameOfResourceNameAndThrowException(String resourceName)
+  throws YarnRuntimeException {
+Matcher matcher = RESOURCE_NAME_PATTERN.matcher(resourceName);
+if (!matcher.matches()) {
+  String message = String.format(
+  "'%s' is not a valid resource name. A valid resource name must"
+  + " begin with a letter and contain only letters, numbers, "
+  + "and any of: '.', '_', or '-'. A valid resource name may also"
+  + " be optionally preceded by a name space followed by a slash."
+  + " A valid name space consists of period-separated groups of"
+  + " letters, numbers, and dashes.",
+  resourceName);
+  throw new YarnRuntimeException(message);
+}
+  }
+
+  @VisibleForTesting
   static void initializeResourcesMap(Configuration conf) {
 
Map<String, ResourceInformation> resourceInformationMap = new HashMap<>();
@@ -246,6 +267,11 @@ public class ResourceUtils {
   }
 }
 
+// Validate names of resource information map.
+for (String name : resourceInformationMap.keySet()) {
+  validateNameOfResourceNameAndThrowException(name);
+}
+
 
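The validation regex above can be exercised directly. This sketch copies `RESOURCE_NAME_PATTERN` verbatim from the diff; the class and method names here are illustrative, not YARN's:

```java
import java.util.regex.Pattern;

// Illustrative harness: the pattern string below is the one added to
// ResourceUtils in this commit.
public class ResourceNameCheck {

    private static final Pattern RESOURCE_NAME_PATTERN = Pattern.compile(
        "^(((\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?\\.)*"
        + "\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?)/)?\\p{Alpha}([\\w.-]*)$");

    static boolean isValid(String name) {
        return RESOURCE_NAME_PATTERN.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("memory-mb"));   // plain name
        System.out.println(isValid("yarn.io/gpu")); // namespaced name
        System.out.println(isValid("2gpus"));       // rejected: must begin with a letter
    }
}
```

A name such as `yarn.io/gpu` passes because `yarn.io` satisfies the period-separated namespace groups before the slash, while `gpu` satisfies the name proper, which must begin with a letter.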

[25/50] [abbrv] hadoop git commit: YARN-7465. start-yarn.sh fails to start ResourceManagers unless running as root.

2017-11-13 Thread kkaranasos
YARN-7465. start-yarn.sh fails to start ResourceManagers unless running as root.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1883a002
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1883a002
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1883a002

Branch: refs/heads/YARN-6592
Commit: 1883a0024949f7946264a4d7649b03fd1881567a
Parents: ac4d2b1
Author: Sean Mackrory 
Authored: Wed Nov 8 15:45:35 2017 -0700
Committer: Sean Mackrory 
Committed: Thu Nov 9 14:47:43 2017 -0700

--
 hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1883a002/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
--
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh b/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
index c009b43..b2244ec 100755
--- a/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
+++ b/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
@@ -65,7 +65,7 @@ else
   RMHOSTS="${RMHOSTS} ${rmhost}"
   done
   echo "Starting resourcemanagers on [${RMHOSTS}]"
-  hadoop_uservar_su yarn "${HADOOP_YARN_HOME}/bin/yarn" \
+  hadoop_uservar_su yarn resourcemanager "${HADOOP_YARN_HOME}/bin/yarn" \
   --config "${HADOOP_CONF_DIR}" \
   --daemon start \
   --workers \


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[04/50] [abbrv] hadoop git commit: YARN-7401. Reduce lock contention in ClusterNodeTracker#getClusterCapacity()

2017-11-13 Thread kkaranasos
YARN-7401. Reduce lock contention in ClusterNodeTracker#getClusterCapacity()


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8db9d61a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8db9d61a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8db9d61a

Branch: refs/heads/YARN-6592
Commit: 8db9d61ac2e04888cb228b29fe54b41c730cf0e6
Parents: 13fa2d4
Author: Daniel Templeton 
Authored: Tue Nov 7 14:53:48 2017 -0800
Committer: Daniel Templeton 
Committed: Tue Nov 7 14:53:48 2017 -0800

--
 .../scheduler/ClusterNodeTracker.java | 18 ++
 1 file changed, 6 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8db9d61a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ClusterNodeTracker.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ClusterNodeTracker.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ClusterNodeTracker.java
index ccec6bc..60ef390 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ClusterNodeTracker.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ClusterNodeTracker.java
@@ -55,8 +55,9 @@ public class ClusterNodeTracker<N extends SchedulerNode> {
   private Map<String, N> nodeNameToNodeMap = new HashMap<>();
   private Map<String, List<N>> nodesPerRack = new HashMap<>();
 
-  private Resource clusterCapacity = Resources.clone(Resources.none());
-  private Resource staleClusterCapacity = null;
+  private final Resource clusterCapacity = Resources.clone(Resources.none());
+  private volatile Resource staleClusterCapacity =
+  Resources.clone(Resources.none());
 
   // Max allocation
   private long maxNodeMemory = -1;
@@ -82,6 +83,7 @@ public class ClusterNodeTracker {
 
   // Update cluster capacity
   Resources.addTo(clusterCapacity, node.getTotalResource());
+  staleClusterCapacity = Resources.clone(clusterCapacity);
 
   // Update maximumAllocation
   updateMaxResources(node, true);
@@ -139,16 +141,7 @@ public class ClusterNodeTracker {
   }
 
   public Resource getClusterCapacity() {
-readLock.lock();
-try {
-  if (staleClusterCapacity == null ||
-  !Resources.equals(staleClusterCapacity, clusterCapacity)) {
-staleClusterCapacity = Resources.clone(clusterCapacity);
-  }
-  return staleClusterCapacity;
-} finally {
-  readLock.unlock();
-}
+return staleClusterCapacity;
   }
 
   public N removeNode(NodeId nodeId) {
@@ -175,6 +168,7 @@ public class ClusterNodeTracker {
 
   // Update cluster capacity
   Resources.subtractFrom(clusterCapacity, node.getTotalResource());
+  staleClusterCapacity = Resources.clone(clusterCapacity);
 
   // Update maximumAllocation
   updateMaxResources(node, false);

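The pattern behind this change, mutate shared state under the write lock but publish a fresh snapshot to a `volatile` field so readers never need the lock, can be sketched with a hypothetical tracker (names and the simplified `long` capacity are illustrative, not YARN's `Resource` type):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the YARN-7401 idiom: writers update the live value
// under the write lock and republish a snapshot; the hot read path is a
// single volatile read with no locking.
public class CapacityTracker {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private long clusterCapacityMb = 0;          // guarded by lock
    private volatile long staleCapacityMb = 0;   // lock-free read path

    void addNode(long memMb) {
        lock.writeLock().lock();
        try {
            clusterCapacityMb += memMb;
            staleCapacityMb = clusterCapacityMb; // publish snapshot on every write
        } finally {
            lock.writeLock().unlock();
        }
    }

    long getClusterCapacity() {
        // May briefly lag a concurrent update, which is acceptable for
        // scheduling heuristics and removes the read-lock contention.
        return staleCapacityMb;
    }

    public static void main(String[] args) {
        CapacityTracker t = new CapacityTracker();
        t.addNode(4096);
        t.addNode(8192);
        System.out.println(t.getClusterCapacity());
    }
}
```

The trade-off is that `getClusterCapacity()` may return a value that is one update behind, exactly why the field in the real class is named "stale".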

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[40/50] [abbrv] hadoop git commit: HADOOP-15008. Fixed period unit calculation for Hadoop Metrics V2. (Contributed by Erik Krogen)

2017-11-13 Thread kkaranasos
HADOOP-15008. Fixed period unit calculation for Hadoop Metrics V2. (Contributed by Erik Krogen)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1b68b8ff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1b68b8ff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1b68b8ff

Branch: refs/heads/YARN-6592
Commit: 1b68b8ff2c6d4704f748d47fc0b903636f3e98c7
Parents: 975a57a
Author: Eric Yang 
Authored: Mon Nov 13 12:40:45 2017 -0500
Committer: Eric Yang 
Committed: Mon Nov 13 12:42:43 2017 -0500

--
 .../metrics2/impl/MetricsSinkAdapter.java   | 12 ++---
 .../hadoop/metrics2/impl/MetricsSystemImpl.java |  7 ++-
 .../metrics2/impl/TestMetricsSystemImpl.java| 49 
 3 files changed, 61 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b68b8ff/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
index 1199ebd..f2e607b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
@@ -51,7 +51,7 @@ class MetricsSinkAdapter implements SinkQueue.Consumer<MetricsBuffer> {
   private final Thread sinkThread;
   private volatile boolean stopping = false;
   private volatile boolean inError = false;
-  private final int period, firstRetryDelay, retryCount;
+  private final int periodMs, firstRetryDelay, retryCount;
   private final long oobPutTimeout;
   private final float retryBackoff;
   private final MetricsRegistry registry = new MetricsRegistry("sinkadapter");
@@ -62,7 +62,7 @@ class MetricsSinkAdapter implements SinkQueue.Consumer<MetricsBuffer> {
   MetricsSinkAdapter(String name, String description, MetricsSink sink,
  String context, MetricsFilter sourceFilter,
  MetricsFilter recordFilter, MetricsFilter metricFilter,
- int period, int queueCapacity, int retryDelay,
+ int periodMs, int queueCapacity, int retryDelay,
  float retryBackoff, int retryCount) {
 this.name = checkNotNull(name, "name");
 this.description = description;
@@ -71,7 +71,7 @@ class MetricsSinkAdapter implements SinkQueue.Consumer<MetricsBuffer> {
 this.sourceFilter = sourceFilter;
 this.recordFilter = recordFilter;
 this.metricFilter = metricFilter;
-this.period = checkArg(period, period > 0, "period");
+this.periodMs = checkArg(periodMs, periodMs > 0, "period");
 firstRetryDelay = checkArg(retryDelay, retryDelay > 0, "retry delay");
this.retryBackoff = checkArg(retryBackoff, retryBackoff>1, "retry backoff");
 oobPutTimeout = (long)
@@ -93,9 +93,9 @@ class MetricsSinkAdapter implements SinkQueue.Consumer<MetricsBuffer> {
 sinkThread.setDaemon(true);
   }
 
-  boolean putMetrics(MetricsBuffer buffer, long logicalTime) {
-if (logicalTime % period == 0) {
-  LOG.debug("enqueue, logicalTime="+ logicalTime);
+  boolean putMetrics(MetricsBuffer buffer, long logicalTimeMs) {
+if (logicalTimeMs % periodMs == 0) {
+  LOG.debug("enqueue, logicalTime="+ logicalTimeMs);
   if (queue.enqueue(buffer)) {
 refreshQueueSizeGauge();
 return true;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b68b8ff/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
index ee1672e..624edc9 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
@@ -519,7 +519,7 @@ public class MetricsSystemImpl extends MetricsSystem 
implements MetricsSource {
 conf.getFilter(SOURCE_FILTER_KEY),
 conf.getFilter(RECORD_FILTER_KEY),
 conf.getFilter(METRIC_FILTER_KEY),
-conf.getInt(PERIOD_KEY, PERIOD_DEFAULT),
+conf.getInt(PERIOD_KEY, PERIOD_DEFAULT) * 1000,
 conf.getInt(QUEUE_CAPACITY_KEY, QUEUE_CAPACITY_DEFAULT),
 conf.getInt(RETRY_DELAY_KEY, 

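A minimal sketch of the unit mismatch this commit fixes: the sink period comes from configuration in seconds, while the logical time driving `putMetrics()` is in milliseconds, so the modulo test fires far too often unless the period is scaled by 1000. The demo below is illustrative, not the Hadoop class:

```java
// Hypothetical sketch of the HADOOP-15008 bug: comparing a millisecond
// logical clock against a period expressed in seconds.
public class PeriodUnitDemo {

    static boolean shouldPublish(long logicalTimeMs, int periodMs) {
        return logicalTimeMs % periodMs == 0;
    }

    public static void main(String[] args) {
        int periodSec = 10;          // configured sink period: 10 seconds
        long timerTickMs = 1_000;    // system timer fires every second

        // Buggy: 1000 % 10 == 0, so a 10-second sink publishes every second.
        System.out.println(shouldPublish(timerTickMs, periodSec));

        // Fixed: with the period converted to ms, the 1-second tick is skipped...
        System.out.println(shouldPublish(timerTickMs, periodSec * 1000));

        // ...and the sink publishes only on the 10-second boundary.
        System.out.println(shouldPublish(10_000, periodSec * 1000));
    }
}
```

This is why the field and parameter were renamed to `periodMs` in the diff: the rename makes the unit part of the contract so the mismatch cannot silently recur.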
[33/50] [abbrv] hadoop git commit: HADOOP-8522. ResetableGzipOutputStream creates invalid gzip files when finish() and resetState() are used. Contributed by Mike Percy

2017-11-13 Thread kkaranasos
HADOOP-8522. ResetableGzipOutputStream creates invalid gzip files when finish() and resetState() are used. Contributed by Mike Percy


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/796a0d3a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/796a0d3a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/796a0d3a

Branch: refs/heads/YARN-6592
Commit: 796a0d3a5c661f0c3b23af9c0db2d8f3db83c322
Parents: 6d201f7
Author: Chris Douglas 
Authored: Fri Nov 10 16:29:36 2017 -0800
Committer: Chris Douglas 
Committed: Fri Nov 10 17:41:29 2017 -0800

--
 .../apache/hadoop/io/compress/GzipCodec.java|  37 +++-
 .../hadoop/io/compress/TestGzipCodec.java   | 169 +++
 2 files changed, 201 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/796a0d3a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
index 11fcf60..9bd861d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
@@ -41,27 +41,54 @@ import org.apache.hadoop.io.compress.zlib.ZlibFactory;
 @InterfaceStability.Evolving
 public class GzipCodec extends DefaultCodec {
   /**
-   * A bridge that wraps around a DeflaterOutputStream to make it 
+   * A bridge that wraps around a DeflaterOutputStream to make it
* a CompressionOutputStream.
*/
   @InterfaceStability.Evolving
   protected static class GzipOutputStream extends CompressorStream {
 
 private static class ResetableGZIPOutputStream extends GZIPOutputStream {
+  /**
+   * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+   * details.
+   */
+  private static final byte[] GZIP_HEADER = new byte[] {
+  0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
+
+  private boolean reset = false;
 
   public ResetableGZIPOutputStream(OutputStream out) throws IOException {
 super(out);
   }
 
-  public void resetState() throws IOException {
-def.reset();
+  public synchronized void resetState() throws IOException {
+reset = true;
+  }
+
+  @Override
+  public synchronized void write(byte[] buf, int off, int len)
+  throws IOException {
+if (reset) {
+  def.reset();
+  crc.reset();
+  out.write(GZIP_HEADER);
+  reset = false;
+}
+super.write(buf, off, len);
+  }
+
+  @Override
+  public synchronized void close() throws IOException {
+reset = false;
+super.close();
   }
+
 }
 
 public GzipOutputStream(OutputStream out) throws IOException {
   super(new ResetableGZIPOutputStream(out));
 }
-
+
 /**
  * Allow children types to put a different type in here.
  * @param out the Deflater stream to use
@@ -69,7 +96,7 @@ public class GzipCodec extends DefaultCodec {
 protected GzipOutputStream(CompressorStream out) {
   super(out);
 }
-
+
 @Override
 public void close() throws IOException {
   out.close();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/796a0d3a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestGzipCodec.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestGzipCodec.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestGzipCodec.java
new file mode 100644
index 0000000..c8c1a47
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestGzipCodec.java
@@ -0,0 +1,169 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT 

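The fix above makes `resetState()` begin a new gzip member: it resets the deflater and CRC and re-emits the fixed ten-byte header on the next write. That is valid because the gzip format allows whole members to be concatenated back to back, which this stdlib-only sketch (not the Hadoop codec) demonstrates:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Two complete gzip streams appended into one buffer still decompress as a
// single logical stream, because GZIPInputStream continues into the next
// member after reading each trailer.
public class GzipMembersDemo {

    static byte[] gzip(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes("UTF-8"));
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream concat = new ByteArrayOutputStream();
        concat.write(gzip("hello "));   // first gzip member
        concat.write(gzip("world"));    // second gzip member
        GZIPInputStream in = new GZIPInputStream(
            new ByteArrayInputStream(concat.toByteArray()));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
        }
        System.out.println(out.toString("UTF-8"));
    }
}
```

This is also why the old behavior was broken: resetting only the deflater without a fresh header and CRC left the stream mid-member, producing bytes no gzip reader could parse.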
[39/50] [abbrv] hadoop git commit: HADOOP-15008. Fixed period unit calculation for Hadoop Metrics V2.

2017-11-13 Thread kkaranasos
HADOOP-15008.  Fixed period unit calculation for Hadoop Metrics V2.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/782681c7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/782681c7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/782681c7

Branch: refs/heads/YARN-6592
Commit: 782681c73e4ae7a02206d4d26635bb1e4984fa24
Parents: 975a57a
Author: Eric Yang 
Authored: Mon Nov 13 12:40:45 2017 -0500
Committer: Eric Yang 
Committed: Mon Nov 13 12:40:45 2017 -0500

--
 .../metrics2/impl/MetricsSinkAdapter.java   | 12 ++---
 .../hadoop/metrics2/impl/MetricsSystemImpl.java |  7 ++-
 .../metrics2/impl/TestMetricsSystemImpl.java| 49 
 3 files changed, 61 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/782681c7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
index 1199ebd..f2e607b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
@@ -51,7 +51,7 @@ class MetricsSinkAdapter implements SinkQueue.Consumer<MetricsBuffer> {
   private final Thread sinkThread;
   private volatile boolean stopping = false;
   private volatile boolean inError = false;
-  private final int period, firstRetryDelay, retryCount;
+  private final int periodMs, firstRetryDelay, retryCount;
   private final long oobPutTimeout;
   private final float retryBackoff;
   private final MetricsRegistry registry = new MetricsRegistry("sinkadapter");
@@ -62,7 +62,7 @@ class MetricsSinkAdapter implements SinkQueue.Consumer<MetricsBuffer> {
   MetricsSinkAdapter(String name, String description, MetricsSink sink,
  String context, MetricsFilter sourceFilter,
  MetricsFilter recordFilter, MetricsFilter metricFilter,
- int period, int queueCapacity, int retryDelay,
+ int periodMs, int queueCapacity, int retryDelay,
  float retryBackoff, int retryCount) {
 this.name = checkNotNull(name, "name");
 this.description = description;
@@ -71,7 +71,7 @@ class MetricsSinkAdapter implements SinkQueue.Consumer<MetricsBuffer> {
 this.sourceFilter = sourceFilter;
 this.recordFilter = recordFilter;
 this.metricFilter = metricFilter;
-this.period = checkArg(period, period > 0, "period");
+this.periodMs = checkArg(periodMs, periodMs > 0, "period");
 firstRetryDelay = checkArg(retryDelay, retryDelay > 0, "retry delay");
 this.retryBackoff = checkArg(retryBackoff, retryBackoff>1, "retry 
backoff");
 oobPutTimeout = (long)
@@ -93,9 +93,9 @@ class MetricsSinkAdapter implements SinkQueue.Consumer<MetricsBuffer> {
 sinkThread.setDaemon(true);
   }
 
-  boolean putMetrics(MetricsBuffer buffer, long logicalTime) {
-if (logicalTime % period == 0) {
-  LOG.debug("enqueue, logicalTime="+ logicalTime);
+  boolean putMetrics(MetricsBuffer buffer, long logicalTimeMs) {
+if (logicalTimeMs % periodMs == 0) {
+  LOG.debug("enqueue, logicalTime="+ logicalTimeMs);
   if (queue.enqueue(buffer)) {
 refreshQueueSizeGauge();
 return true;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/782681c7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
index ee1672e..624edc9 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
@@ -519,7 +519,7 @@ public class MetricsSystemImpl extends MetricsSystem 
implements MetricsSource {
 conf.getFilter(SOURCE_FILTER_KEY),
 conf.getFilter(RECORD_FILTER_KEY),
 conf.getFilter(METRIC_FILTER_KEY),
-conf.getInt(PERIOD_KEY, PERIOD_DEFAULT),
+conf.getInt(PERIOD_KEY, PERIOD_DEFAULT) * 1000,
 conf.getInt(QUEUE_CAPACITY_KEY, QUEUE_CAPACITY_DEFAULT),
 conf.getInt(RETRY_DELAY_KEY, RETRY_DELAY_DEFAULT),
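
[Editor's note: the root cause here was a unit mismatch: metrics.period is
configured in seconds, while the sink adapter's putMetrics receives a logical
time measured in milliseconds, so the logicalTime % period == 0 check fired
far too often. A minimal sketch of the bug and the fix follows; the class and
constant names are illustrative, not Hadoop's.]

```java
public class PeriodUnits {
    // Configured period, in seconds (as read from metrics.period).
    static final int PERIOD_SECONDS = 10;

    // Period converted to milliseconds once, up front, exactly as the patch
    // does with conf.getInt(PERIOD_KEY, PERIOD_DEFAULT) * 1000.
    static final int PERIOD_MS = PERIOD_SECONDS * 1000;

    // Buggy check: compares a millisecond clock against a second-denominated
    // period, so it matches every 10 ms instead of every 10 s.
    static boolean shouldEmitBuggy(long logicalTimeMs) {
        return logicalTimeMs % PERIOD_SECONDS == 0;
    }

    // Fixed check: both operands are in milliseconds.
    static boolean shouldEmitFixed(long logicalTimeMs) {
        return logicalTimeMs % PERIOD_MS == 0;
    }

    public static void main(String[] args) {
        // 30 ms in: the buggy check emits spuriously, the fixed one does not.
        System.out.println(shouldEmitBuggy(30));     // true (spurious emit)
        System.out.println(shouldEmitFixed(30));     // false
        System.out.println(shouldEmitFixed(30_000)); // true (a 10 s boundary)
    }
}
```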
 

[24/50] [abbrv] hadoop git commit: YARN-7437. Rename PlacementSet and SchedulingPlacementSet. (Wangda Tan via kkaranasos)

2017-11-13 Thread kkaranasos
YARN-7437. Rename PlacementSet and SchedulingPlacementSet. (Wangda Tan via 
kkaranasos)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ac4d2b10
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ac4d2b10
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ac4d2b10

Branch: refs/heads/YARN-6592
Commit: ac4d2b1081d8836a21bc70e77f4e6cd2071a9949
Parents: a2c150a
Author: Konstantinos Karanasos 
Authored: Thu Nov 9 13:01:14 2017 -0800
Committer: Konstantinos Karanasos 
Committed: Thu Nov 9 13:01:24 2017 -0800

--
 .../scheduler/AppSchedulingInfo.java|  93 ++--
 .../scheduler/ContainerUpdateContext.java   |  15 +-
 .../scheduler/SchedulerApplicationAttempt.java  |  27 +-
 .../scheduler/activities/ActivitiesLogger.java  |   2 +-
 .../scheduler/capacity/AbstractCSQueue.java |   4 +-
 .../scheduler/capacity/CSQueue.java |  12 +-
 .../scheduler/capacity/CapacityScheduler.java   |  69 +--
 .../scheduler/capacity/LeafQueue.java   |  39 +-
 .../scheduler/capacity/ParentQueue.java |  37 +-
 .../allocator/AbstractContainerAllocator.java   |  10 +-
 .../capacity/allocator/ContainerAllocator.java  |  11 +-
 .../allocator/RegularContainerAllocator.java|  57 +--
 .../scheduler/common/fica/FiCaSchedulerApp.java |  23 +-
 .../scheduler/fair/FSAppAttempt.java|   2 +-
 .../placement/AppPlacementAllocator.java| 163 +++
 .../scheduler/placement/CandidateNodeSet.java   |  61 +++
 .../placement/CandidateNodeSetUtils.java|  44 ++
 .../LocalityAppPlacementAllocator.java  | 422 +++
 .../LocalitySchedulingPlacementSet.java | 416 --
 .../scheduler/placement/PlacementSet.java   |  65 ---
 .../scheduler/placement/PlacementSetUtils.java  |  36 --
 .../placement/SchedulingPlacementSet.java   | 158 ---
 .../placement/SimpleCandidateNodeSet.java   |  68 +++
 .../scheduler/placement/SimplePlacementSet.java |  70 ---
 .../scheduler/placement/package-info.java   |  28 ++
 .../capacity/TestCapacityScheduler.java |   5 +-
 .../scheduler/capacity/TestChildQueueOrder.java |  18 +-
 .../capacity/TestContainerResizing.java |   7 +-
 .../TestNodeLabelContainerAllocation.java   |   2 +-
 .../scheduler/capacity/TestParentQueue.java | 119 +++---
 30 files changed, 1082 insertions(+), 1001 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ac4d2b10/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
index 082ec14..9f49880 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
@@ -46,9 +46,9 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerState;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.PendingAsk;
-import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.LocalitySchedulingPlacementSet;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.AppPlacementAllocator;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.LocalityAppPlacementAllocator;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.ResourceRequestUpdateResult;
-import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SchedulingPlacementSet;
 import org.apache.hadoop.yarn.server.scheduler.SchedulerRequestKey;
 import org.apache.hadoop.yarn.util.resource.Resources;
 /**
@@ -82,8 +82,8 @@ public class AppSchedulingInfo {
 
  private final ConcurrentSkipListSet<SchedulerRequestKey>
  schedulerKeys = new ConcurrentSkipListSet<>();
-  final Map
-  schedulerKeyToPlacementSets = new ConcurrentHashMap<>();
+  

[02/50] [abbrv] hadoop git commit: YARN-7360. TestRM.testNMTokenSentForNormalContainer() should be scheduler agnostic.

2017-11-13 Thread kkaranasos
YARN-7360. TestRM.testNMTokenSentForNormalContainer() should be scheduler 
agnostic.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8f214dc4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8f214dc4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8f214dc4

Branch: refs/heads/YARN-6592
Commit: 8f214dc4f8423250947a3f0027f70b9ab402ab62
Parents: cbc632d
Author: Haibo Chen 
Authored: Mon Nov 6 15:45:37 2017 -0800
Committer: Haibo Chen 
Committed: Mon Nov 6 15:45:37 2017 -0800

--
 .../hadoop/yarn/server/resourcemanager/TestRM.java   | 11 +--
 1 file changed, 5 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8f214dc4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java
index f912f68..3679319 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java
@@ -21,7 +21,7 @@ package org.apache.hadoop.yarn.server.resourcemanager;
 import com.google.common.base.Supplier;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.yarn.event.DrainDispatcher;
-import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairSchedulerConfiguration;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler;
 import org.junit.Before;
 import static org.mockito.Matchers.argThat;
 import static org.mockito.Mockito.doNothing;
@@ -73,7 +73,6 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptE
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEventType;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptState;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
-import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;
 import 
org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM;
 import org.apache.log4j.Level;
@@ -205,8 +204,6 @@ public class TestRM extends ParameterizedSchedulerTestBase {
   // corresponding NM Token.
   @Test (timeout = 2)
   public void testNMTokenSentForNormalContainer() throws Exception {
-conf.set(YarnConfiguration.RM_SCHEDULER,
-CapacityScheduler.class.getCanonicalName());
 MockRM rm = new MockRM(conf);
 rm.start();
 MockNM nm1 = rm.registerNode("h1:1234", 5120);
@@ -215,8 +212,10 @@ public class TestRM extends ParameterizedSchedulerTestBase 
{
 
 // Call getNewContainerId to increase container Id so that the AM container
 // Id doesn't equal to one.
-CapacityScheduler cs = (CapacityScheduler) rm.getResourceScheduler();
-cs.getApplicationAttempt(attempt.getAppAttemptId()).getNewContainerId();
+AbstractYarnScheduler scheduler = (AbstractYarnScheduler)
+rm.getResourceScheduler();
+scheduler.getApplicationAttempt(attempt.getAppAttemptId()).
+getNewContainerId();
 
 MockAM am = MockRM.launchAM(app, rm, nm1);
 // am container Id not equal to 1.
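
[Editor's note: the change above makes the test scheduler-agnostic by casting
to the shared base class instead of a concrete scheduler. The pattern can be
sketched with a toy hierarchy; only the base-class cast mirrors the patch,
and all names below are illustrative.]

```java
public class SchedulerAgnostic {
    // Toy stand-ins for the YARN scheduler hierarchy.
    abstract static class AbstractScheduler {
        abstract int getNewContainerId();
    }

    static class CapacityLike extends AbstractScheduler {
        private int next = 1;
        int getNewContainerId() { return next++; }
    }

    static class FairLike extends AbstractScheduler {
        private int next = 1;
        int getNewContainerId() { return next++; }
    }

    // Before the patch the test cast to the concrete capacity scheduler,
    // which throws ClassCastException when the parameterized run configures
    // the fair scheduler. Casting to the base type works for both.
    static int bumpContainerId(Object scheduler) {
        return ((AbstractScheduler) scheduler).getNewContainerId();
    }

    public static void main(String[] args) {
        System.out.println(bumpContainerId(new CapacityLike())); // 1
        System.out.println(bumpContainerId(new FairLike()));     // 1
    }
}
```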


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[41/50] [abbrv] hadoop git commit: Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into trunk

2017-11-13 Thread kkaranasos
Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into 
trunk


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0d6bab94
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0d6bab94
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0d6bab94

Branch: refs/heads/YARN-6592
Commit: 0d6bab94c49dbc783912ad9903e3d76849b8122d
Parents: 1b68b8f 782681c
Author: Eric Yang 
Authored: Mon Nov 13 12:43:18 2017 -0500
Committer: Eric Yang 
Committed: Mon Nov 13 12:43:18 2017 -0500

--

--






[11/50] [abbrv] hadoop git commit: YARN-7343. Add a junit test for ContainerScheduler recovery. (Sampada Dehankar via asuresh)

2017-11-13 Thread kkaranasos
YARN-7343. Add a junit test for ContainerScheduler recovery. (Sampada Dehankar 
via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cb35a595
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cb35a595
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cb35a595

Branch: refs/heads/YARN-6592
Commit: cb35a59589f0b18e1989543f9e4c2a33c96d5ff7
Parents: a9c70b0
Author: Arun Suresh 
Authored: Wed Nov 8 08:14:02 2017 -0800
Committer: Arun Suresh 
Committed: Wed Nov 8 08:14:02 2017 -0800

--
 .../TestContainerSchedulerRecovery.java | 314 +++
 1 file changed, 314 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb35a595/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerRecovery.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerRecovery.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerRecovery.java
new file mode 100644
index 000..2ae8b97
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerRecovery.java
@@ -0,0 +1,314 @@
+/* Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler;
+
+import static org.junit.Assert.assertEquals;
+import static org.mockito.Mockito.when;
+import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.doNothing;
+
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.ExecutionType;
+import org.apache.hadoop.yarn.event.AsyncDispatcher;
+import org.apache.hadoop.yarn.security.ContainerTokenIdentifier;
+import org.apache.hadoop.yarn.server.nodemanager.NodeManager.NMContext;
+import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl;
+import org.apache.hadoop.yarn.server.nodemanager.metrics.NodeManagerMetrics;
+import 
org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.RecoveredContainerStatus;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.InjectMocks;
+import org.mockito.Mock;
+import org.mockito.Mockito;
+import org.mockito.MockitoAnnotations;
+
+/**
+ * Tests to verify that the {@link ContainerScheduler} is able to
+ * recover active containers based on RecoveredContainerStatus and
+ * ExecutionType.
+ */
+public class TestContainerSchedulerRecovery {
+
+  @Mock private NMContext context;
+
+  @Mock private NodeManagerMetrics metrics;
+
+  @Mock private AsyncDispatcher dispatcher;
+
+  @Mock private ContainerTokenIdentifier token;
+
+  @Mock private ContainerImpl container;
+
+  @Mock private ApplicationId appId;
+
+  @Mock private ApplicationAttemptId appAttemptId;
+
+  @Mock private ContainerId containerId;
+
+  @Mock private AllocationBasedResourceUtilizationTracker
+  allocationBasedResourceUtilizationTracker;
+
+  @InjectMocks private ContainerScheduler tempContainerScheduler =
+  new ContainerScheduler(context, dispatcher, metrics, 0);
+
+  private ContainerScheduler spy;
+
+  @Before public void setUp() throws Exception {
+MockitoAnnotations.initMocks(this);
+spy = spy(tempContainerScheduler);
+when(container.getContainerId()).thenReturn(containerId);
+

[26/50] [abbrv] hadoop git commit: YARN-6909. Use LightWeightedResource when number of resource types more than two. (Sunil G via wangda)

2017-11-13 Thread kkaranasos
YARN-6909. Use LightWeightedResource when number of resource types more than 
two. (Sunil G via wangda)

Change-Id: I90e021c5dea7abd9ec6bd73b2287c8adebe14595


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dd07038f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dd07038f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dd07038f

Branch: refs/heads/YARN-6592
Commit: dd07038ffae33a5e73eb331477d43eaf3f4c2aaa
Parents: 1883a00
Author: Wangda Tan 
Authored: Thu Nov 9 14:51:15 2017 -0800
Committer: Wangda Tan 
Committed: Thu Nov 9 14:51:15 2017 -0800

--
 .../hadoop/yarn/api/records/Resource.java   | 48 ++
 .../api/records/impl/LightWeightResource.java   | 94 +---
 .../api/records/impl/pb/ResourcePBImpl.java | 88 +-
 .../scheduler/ClusterNodeTracker.java   |  2 +-
 4 files changed, 141 insertions(+), 91 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dd07038f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
index be292ff..65b5dce 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
@@ -28,9 +28,9 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
 import org.apache.hadoop.classification.InterfaceStability.Stable;
 import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
+import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes;
 import org.apache.hadoop.yarn.api.records.impl.LightWeightResource;
 import org.apache.hadoop.yarn.exceptions.ResourceNotFoundException;
-import org.apache.hadoop.yarn.util.Records;
 import org.apache.hadoop.yarn.util.resource.ResourceUtils;
 
 /**
@@ -75,34 +75,27 @@ public abstract class Resource implements Comparable<Resource> {
   @Public
   @Stable
   public static Resource newInstance(int memory, int vCores) {
-if (ResourceUtils.getNumberOfKnownResourceTypes() > 2) {
-  Resource ret = Records.newRecord(Resource.class);
-  ret.setMemorySize(memory);
-  ret.setVirtualCores(vCores);
-  return ret;
-}
 return new LightWeightResource(memory, vCores);
   }
 
   @Public
   @Stable
   public static Resource newInstance(long memory, int vCores) {
-if (ResourceUtils.getNumberOfKnownResourceTypes() > 2) {
-  Resource ret = Records.newRecord(Resource.class);
-  ret.setMemorySize(memory);
-  ret.setVirtualCores(vCores);
-  return ret;
-}
 return new LightWeightResource(memory, vCores);
   }
 
   @InterfaceAudience.Private
   @InterfaceStability.Unstable
   public static Resource newInstance(Resource resource) {
-Resource ret = Resource.newInstance(resource.getMemorySize(),
-resource.getVirtualCores());
-if (ResourceUtils.getNumberOfKnownResourceTypes() > 2) {
-  Resource.copy(resource, ret);
+Resource ret;
+int numberOfKnownResourceTypes = ResourceUtils
+.getNumberOfKnownResourceTypes();
+if (numberOfKnownResourceTypes > 2) {
+  ret = new LightWeightResource(resource.getMemorySize(),
+  resource.getVirtualCores(), resource.getResources());
+} else {
+  ret = new LightWeightResource(resource.getMemorySize(),
+  resource.getVirtualCores());
 }
 return ret;
   }
@@ -411,7 +404,7 @@ public abstract class Resource implements Comparable<Resource> {
 int arrLenOther = otherResources.length;
 
 // compare memory and vcores first(in that order) to preserve
-// existing behaviour
+// existing behavior.
 for (int i = 0; i < arrLenThis; i++) {
   ResourceInformation otherEntry;
   try {
@@ -483,4 +476,23 @@ public abstract class Resource implements Comparable<Resource> {
 }
 return Long.valueOf(value).intValue();
   }
+
+  /**
+   * Create ResourceInformation with basic fields.
+   * @param name Resource Type Name
+   * @param unit Default unit of provided resource type
+   * @param value Value associated with given resource
+   * @return ResourceInformation object
+   */
+  protected static ResourceInformation newDefaultInformation(String name,
+  String unit, long value) {
+ResourceInformation ri = new ResourceInformation();
+ri.setName(name);
+ri.setValue(value);
+
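
[Editor's note: the patch makes Resource.newInstance(memory, vcores) return a
LightWeightResource unconditionally, and only carries the full resource array
when more than the two base types (memory, vcores) are known. The selection
logic can be sketched as follows; the class is a simplified stand-in, not
YARN's actual implementation.]

```java
public class ResourceFactory {
    // Simplified stand-in: the real LightWeightResource implements the full
    // Resource API over an internal ResourceInformation array.
    static class LightResource {
        final long memory;
        final int vcores;
        final long[] extraTypes; // null when only memory/vcores are known

        LightResource(long memory, int vcores) {
            this(memory, vcores, null);
        }
        LightResource(long memory, int vcores, long[] extraTypes) {
            this.memory = memory;
            this.vcores = vcores;
            this.extraTypes = extraTypes;
        }
    }

    // Mirrors the patched newInstance(Resource): copy the full type array
    // only when more than two resource types are configured.
    static LightResource newInstance(long memory, int vcores,
                                     int knownResourceTypes, long[] extras) {
        if (knownResourceTypes > 2) {
            return new LightResource(memory, vcores, extras);
        }
        return new LightResource(memory, vcores);
    }

    public static void main(String[] args) {
        LightResource r = newInstance(4096, 2, 2, new long[] {1});
        System.out.println(r.extraTypes == null); // true
    }
}
```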

[14/50] [abbrv] hadoop git commit: HADOOP-15025. Ensure singleton for ResourceEstimatorService. (Rui Li via Subru).

2017-11-13 Thread kkaranasos
HADOOP-15025. Ensure singleton for ResourceEstimatorService. (Rui Li via Subru).


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f2df6b89
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f2df6b89
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f2df6b89

Branch: refs/heads/YARN-6592
Commit: f2df6b8983aace73ad27934bd9f7f4d766e0b25f
Parents: 49b4c0b
Author: Subru Krishnan 
Authored: Wed Nov 8 18:07:12 2017 -0800
Committer: Subru Krishnan 
Committed: Wed Nov 8 18:07:12 2017 -0800

--
 .../service/ResourceEstimatorService.java   |  5 ++--
 .../service/TestResourceEstimatorService.java   | 25 +---
 2 files changed, 4 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f2df6b89/hadoop-tools/hadoop-resourceestimator/src/main/java/org/apache/hadoop/resourceestimator/service/ResourceEstimatorService.java
--
diff --git 
a/hadoop-tools/hadoop-resourceestimator/src/main/java/org/apache/hadoop/resourceestimator/service/ResourceEstimatorService.java
 
b/hadoop-tools/hadoop-resourceestimator/src/main/java/org/apache/hadoop/resourceestimator/service/ResourceEstimatorService.java
index 0e0e094..5d3aea4 100644
--- 
a/hadoop-tools/hadoop-resourceestimator/src/main/java/org/apache/hadoop/resourceestimator/service/ResourceEstimatorService.java
+++ 
b/hadoop-tools/hadoop-resourceestimator/src/main/java/org/apache/hadoop/resourceestimator/service/ResourceEstimatorService.java
@@ -34,6 +34,7 @@ import javax.ws.rs.PathParam;
 import javax.ws.rs.Produces;
 import javax.ws.rs.core.MediaType;
 
+import com.sun.jersey.spi.resource.Singleton;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.resourceestimator.common.api.RecurrenceId;
 import org.apache.hadoop.resourceestimator.common.api.ResourceSkyline;
@@ -56,13 +57,13 @@ import org.slf4j.LoggerFactory;
 import com.google.gson.Gson;
 import com.google.gson.GsonBuilder;
 import com.google.gson.reflect.TypeToken;
-import com.google.inject.Singleton;
 
 /**
  * Resource Estimator Service which provides a set of REST APIs for users to
  * use the estimation service.
  */
-@Singleton @Path("/resourceestimator") public class ResourceEstimatorService {
+@Singleton
+@Path("/resourceestimator") public class ResourceEstimatorService {
   private static final Logger LOGGER =
   LoggerFactory.getLogger(ResourceEstimatorService.class);
   private final SkylineStore skylineStore;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f2df6b89/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/TestResourceEstimatorService.java
--
diff --git 
a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/TestResourceEstimatorService.java
 
b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/TestResourceEstimatorService.java
index 91a486e..785641c 100644
--- 
a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/TestResourceEstimatorService.java
+++ 
b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/TestResourceEstimatorService.java
@@ -37,18 +37,12 @@ import 
org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 import com.google.gson.Gson;
 import com.google.gson.GsonBuilder;
 import com.google.gson.reflect.TypeToken;
-import com.google.inject.Guice;
-import com.google.inject.servlet.ServletModule;
 import com.sun.jersey.api.client.WebResource;
-import com.sun.jersey.guice.spi.container.servlet.GuiceContainer;
 import com.sun.jersey.test.framework.JerseyTest;
-import com.sun.jersey.test.framework.WebAppDescriptor;
 
 /**
  * Test ResourceEstimatorService.
@@ -70,29 +64,12 @@ public class TestResourceEstimatorService extends 
JerseyTest {
   private long containerMemAlloc;
   private int containerCPUAlloc;
 
-  private static class WebServletModule extends ServletModule {
-@Override protected void configureServlets() {
-  bind(ResourceEstimatorService.class);
-  serve("/*").with(GuiceContainer.class);
-}
-  }
-
-  static {
-GuiceServletConfig
-.setInjector(Guice.createInjector(new WebServletModule()));
-  }
-
   public TestResourceEstimatorService() {
-super(new WebAppDescriptor.Builder(
-"org.apache.hadoop.resourceestimator.service")
-.contextListenerClass(GuiceServletConfig.class)
-

[34/50] [abbrv] hadoop git commit: YARN-7406. Moving logging APIs over to slf4j in hadoop-yarn-api. Contributed by Yeliang Cang.

2017-11-13 Thread kkaranasos
YARN-7406. Moving logging APIs over to slf4j in hadoop-yarn-api. Contributed by 
Yeliang Cang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2c2b7a36
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2c2b7a36
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2c2b7a36

Branch: refs/heads/YARN-6592
Commit: 2c2b7a3672e0744ce6a77a117cedefba04fed603
Parents: 796a0d3
Author: bibinchundatt 
Authored: Sat Nov 11 10:36:27 2017 +0530
Committer: bibinchundatt 
Committed: Sat Nov 11 10:36:27 2017 +0530

--
 .../main/java/org/apache/hadoop/yarn/conf/HAUtil.java   | 12 ++--
 .../apache/hadoop/yarn/util/resource/ResourceUtils.java |  8 
 2 files changed, 10 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2c2b7a36/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java
index 528b642..60c370b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java
@@ -21,8 +21,6 @@ package org.apache.hadoop.yarn.conf;
 import java.net.InetSocketAddress;
 import java.util.Collection;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
@@ -30,10 +28,12 @@ import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 @InterfaceAudience.Private
 public class HAUtil {
-  private static Log LOG = LogFactory.getLog(HAUtil.class);
+  private static Logger LOG = LoggerFactory.getLogger(HAUtil.class);
 
   @VisibleForTesting
   public static final String BAD_CONFIG_MESSAGE_PREFIX =
@@ -302,9 +302,9 @@ public class HAUtil {
 String confKey = getConfKeyForRMInstance(prefix, conf);
 String retVal = conf.getTrimmed(confKey);
 if (LOG.isTraceEnabled()) {
-  LOG.trace("getConfValueForRMInstance: prefix = " + prefix +
-  "; confKey being looked up = " + confKey +
-  "; value being set to = " + retVal);
+  LOG.trace("getConfValueForRMInstance: prefix = {};" +
+  " confKey being looked up = {};" +
+  " value being set to = {}", prefix, confKey, retVal);
 }
 return retVal;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2c2b7a36/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
index 9c9c0ef8..1170c72 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
@@ -20,8 +20,6 @@ package org.apache.hadoop.yarn.util.resource;
 
 import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes;
 import org.apache.hadoop.yarn.api.records.Resource;
@@ -32,6 +30,8 @@ import 
org.apache.hadoop.yarn.conf.ConfigurationProviderFactory;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.FileNotFoundException;
 import java.io.IOException;
@@ -71,7 +71,7 @@ public class ResourceUtils {
   private static volatile Map 
readOnlyNodeResources;
   private static volatile int numKnownResourceTypes = -1;
 
-  static final Log LOG = LogFactory.getLog(ResourceUtils.class);
+  static final Logger LOG = LoggerFactory.getLogger(ResourceUtils.class);
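
[Editor's note: the slf4j migration above replaces string concatenation with
{} placeholders, so argument formatting is deferred until the log level is
actually enabled. A minimal stand-in for the placeholder substitution follows;
this is not slf4j's implementation, just an illustration of the contract.]

```java
public class PlaceholderLog {
    // Substitute each "{}" with the next argument, the way slf4j's
    // MessageFormatter does for LOG.trace("a = {}; b = {}", a, b).
    static String format(String template, Object... args) {
        StringBuilder out = new StringBuilder();
        int from = 0;
        for (Object arg : args) {
            int at = template.indexOf("{}", from);
            if (at < 0) {
                break; // more arguments than placeholders: ignore the rest
            }
            out.append(template, from, at).append(arg);
            from = at + 2;
        }
        return out.append(template.substring(from)).toString();
    }

    public static void main(String[] args) {
        System.out.println(format("prefix = {}; confKey = {}",
            "yarn.resourcemanager", "rm1"));
        // prints: prefix = yarn.resourcemanager; confKey = rm1
    }
}
```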

[46/50] [abbrv] hadoop git commit: Addendum patch for Configuration fix. (Jason Lowe via asuresh)

2017-11-13 Thread kkaranasos
Addendum patch for Configuration fix. (Jason Lowe via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b07e68b0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b07e68b0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b07e68b0

Branch: refs/heads/YARN-6592
Commit: b07e68b02a34d272114dda4194992a847928aef8
Parents: 4908a89
Author: Arun Suresh 
Authored: Mon Nov 13 14:03:50 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 14:03:50 2017 -0800

--
 .../src/main/java/org/apache/hadoop/conf/Configuration.java| 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b07e68b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index dfbeec7..fce2194 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -2962,7 +2962,8 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 // xi:include are treated as inline and retain current source
 URL include = getResource(confInclude);
 if (include != null) {
-  Resource classpathResource = new Resource(include, name);
+  Resource classpathResource = new Resource(include, name,
+  wrapper.isParserRestricted());
   loadResource(properties, classpathResource, quiet);
 } else {
   URL url;
@@ -2983,7 +2984,8 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 }
 url = href.toURI().toURL();
   }
-  Resource uriResource = new Resource(url, name);
+  Resource uriResource = new Resource(url, name,
+  wrapper.isParserRestricted());
   loadResource(properties, uriResource, quiet);
 }
 break;
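The hunks above thread `wrapper.isParserRestricted()` into each `Resource` so that a file pulled in via `xi:include` inherits the parser restriction of the file that included it. A self-contained Java sketch of that inheritance rule (toy types, not the Hadoop `Configuration.Resource` class):

```java
// Toy model of the addendum: an included resource must carry the restriction
// of the resource that included it, instead of defaulting to unrestricted.
// "ConfResource" is illustrative, not org.apache.hadoop.conf code.
public class IncludeRestriction {
    static final class ConfResource {
        final String name;
        final boolean parserRestricted;

        ConfResource(String name, boolean parserRestricted) {
            this.name = name;
            this.parserRestricted = parserRestricted;
        }

        // Before the fix the flag was dropped here; the patch threads it through.
        ConfResource include(String childName) {
            return new ConfResource(childName, this.parserRestricted);
        }
    }

    public static void main(String[] args) {
        ConfResource root = new ConfResource("job.xml", true);
        ConfResource inc = root.include("included-site.xml");
        System.out.println(inc.name + " restricted=" + inc.parserRestricted); // included-site.xml restricted=true
    }
}
```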


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[37/50] [abbrv] hadoop git commit: YARN-7445. Render Applications and Services page with filters in new YARN UI. Contributed by Vasudevan Skm.

2017-11-13 Thread kkaranasos
YARN-7445. Render Applications and Services page with filters in new YARN UI. 
Contributed by Vasudevan Skm.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fb62bd62
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fb62bd62
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fb62bd62

Branch: refs/heads/YARN-6592
Commit: fb62bd625f53f0407f711317b208a6e4de8e43bc
Parents: 3e26077
Author: Sunil G 
Authored: Mon Nov 13 19:41:49 2017 +0530
Committer: Sunil G 
Committed: Mon Nov 13 19:41:49 2017 +0530

--
 .gitignore  |   3 +
 .../components/em-table-simple-status-cell.js   |  31 ++
 .../webapp/app/controllers/app-table-columns.js |  30 --
 .../webapp/app/controllers/yarn-apps/apps.js|   5 +-
 .../webapp/app/controllers/yarn-services.js |   4 +-
 .../src/main/webapp/app/styles/app.css  | 101 +--
 .../components/em-table-simple-status-cell.hbs  |  27 +
 .../src/main/webapp/app/templates/yarn-apps.hbs |  64 +---
 .../main/webapp/app/templates/yarn-services.hbs |  74 ++
 .../hadoop-yarn-ui/src/main/webapp/bower.json   |   3 +-
 .../src/main/webapp/config/environment.js   |   1 -
 .../src/main/webapp/ember-cli-build.js  |   1 +
 .../src/main/webapp/jsconfig.json   |  10 +-
 .../hadoop-yarn-ui/src/main/webapp/package.json |   2 +-
 .../em-table-simple-status-cell-test.js |  43 
 .../hadoop-yarn-ui/src/main/webapp/yarn.lock|   6 +-
 16 files changed, 250 insertions(+), 155 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb62bd62/.gitignore
--
diff --git a/.gitignore b/.gitignore
index 724162d..817556f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -44,3 +44,6 @@ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/dist
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tmp
 yarnregistry.pdf
 patchprocess/
+
+
+.history/
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb62bd62/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-simple-status-cell.js
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-simple-status-cell.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-simple-status-cell.js
new file mode 100644
index 000..af8b605
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-simple-status-cell.js
@@ -0,0 +1,31 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Component.extend({
+  content: null,
+
+  classNames: ["em-table-simple-status-cell"],
+
+  statusName: Ember.computed("content", function () {
+var status = this.get("content");
+
+return status.toLowerCase().capitalize();
+  }),
+});
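The `statusName` computed property above boils down to a lower-case-then-capitalize transform on the raw status string. A stand-alone Java sketch of the same logic (the real component is Ember JS and uses Ember's `String.capitalize`):

```java
// Sketch of the cell's statusName computation: lower-case the raw status,
// then capitalize the first letter ("RUNNING" -> "Running").
public class StatusFormat {
    static String simpleStatus(String status) {
        String lower = status.toLowerCase();
        if (lower.isEmpty()) {
            return lower; // guard this sketch against empty input
        }
        return Character.toUpperCase(lower.charAt(0)) + lower.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(simpleStatus("RUNNING")); // Running
    }
}
```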

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb62bd62/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
index 8a34f1a..05bfad45 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
@@ -34,7 +34,8 @@ export default Ember.Controller.extend({
   headerTitle: 'Application ID',
   contentPath: 'id',
   cellComponentName: 'em-table-linked-cell',
-  minWidth: "250px",
+  minWidth: "280px",
+  

[32/50] [abbrv] hadoop git commit: HDFS-12498. Journal Syncer is not started in Federated + HA cluster. Contributed by Bharat Viswanadham.

2017-11-13 Thread kkaranasos
HDFS-12498. Journal Syncer is not started in Federated + HA cluster. 
Contributed by Bharat Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6d201f77
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6d201f77
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6d201f77

Branch: refs/heads/YARN-6592
Commit: 6d201f77c734d6c6a9e3e297fe3dbff251cbb8b3
Parents: 1d6f8be
Author: Arpit Agarwal 
Authored: Fri Nov 10 16:30:38 2017 -0800
Committer: Arpit Agarwal 
Committed: Fri Nov 10 16:30:38 2017 -0800

--
 .../hdfs/qjournal/server/JournalNodeSyncer.java |  55 --
 .../hdfs/qjournal/server/TestJournalNode.java   | 103 ++-
 2 files changed, 146 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6d201f77/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
index cf5a9ec..490b3ea 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
@@ -26,6 +26,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 
+import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.apache.hadoop.hdfs.protocolPB.PBHelper;
 import org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos;
 import org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos
@@ -51,6 +52,8 @@ import java.net.MalformedURLException;
 import java.net.URI;
 import java.net.URISyntaxException;
 import java.net.URL;
+import java.util.Collection;
+import java.util.HashSet;
 import java.util.List;
 
 /**
@@ -263,25 +266,63 @@ public class JournalNodeSyncer {
   }
 
   private List<InetSocketAddress> getOtherJournalNodeAddrs() {
-URI uri = null;
+String uriStr = "";
 try {
-      String uriStr = conf.get(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY);
+      uriStr = conf.getTrimmed(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY);
+
+  if (uriStr == null || uriStr.isEmpty()) {
+if (nameServiceId != null) {
+  uriStr = conf.getTrimmed(DFSConfigKeys
+  .DFS_NAMENODE_SHARED_EDITS_DIR_KEY + "." + nameServiceId);
+}
+  }
+
   if (uriStr == null || uriStr.isEmpty()) {
-LOG.warn("Could not construct Shared Edits Uri");
+HashSet<String> sharedEditsUri = Sets.newHashSet();
+if (nameServiceId != null) {
+  Collection<String> nnIds = DFSUtilClient.getNameNodeIds(
+  conf, nameServiceId);
+  for (String nnId : nnIds) {
+String suffix = nameServiceId + "." + nnId;
+uriStr = conf.getTrimmed(DFSConfigKeys
+.DFS_NAMENODE_SHARED_EDITS_DIR_KEY + "." + suffix);
+sharedEditsUri.add(uriStr);
+  }
+  if (sharedEditsUri.size() > 1) {
+uriStr = null;
+LOG.error("The conf property " + DFSConfigKeys
+.DFS_NAMENODE_SHARED_EDITS_DIR_KEY + " not set properly, " +
+"it has been configured with different journalnode values " +
+sharedEditsUri.toString() + " for a" +
+" single nameserviceId" + nameServiceId);
+  }
+}
+  }
+
+  if (uriStr == null || uriStr.isEmpty()) {
+LOG.error("Could not construct Shared Edits Uri");
 return null;
+  } else {
+return getJournalAddrList(uriStr);
   }
-  uri = new URI(uriStr);
-  return Util.getLoggerAddresses(uri,
-  Sets.newHashSet(jn.getBoundIpcAddress()));
+
 } catch (URISyntaxException e) {
   LOG.error("The conf property " + DFSConfigKeys
   .DFS_NAMENODE_SHARED_EDITS_DIR_KEY + " not set properly.");
 } catch (IOException e) {
-  LOG.error("Could not parse JournalNode addresses: " + uri);
+  LOG.error("Could not parse JournalNode addresses: " + uriStr);
 }
 return null;
   }
 
+  private List<InetSocketAddress> getJournalAddrList(String uriStr) throws
+  URISyntaxException,
+  IOException {
+URI uri = new URI(uriStr);
+return Util.getLoggerAddresses(uri,
+Sets.newHashSet(jn.getBoundIpcAddress()));
+  }
+
   private JournalIdProto convertJournalId(String journalId) {
 return QJournalProtocolProtos.JournalIdProto.newBuilder()
   .setIdentifier(journalId)
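The key-resolution order the patch introduces — plain `dfs.namenode.shared.edits.dir`, then the nameservice-suffixed key, then per-NameNode suffixed keys that must all agree — can be sketched stand-alone. A `Map` stands in for Hadoop's `Configuration`; names and return conventions are illustrative:

```java
import java.util.*;

public class SharedEditsKeyResolver {
    static final String KEY = "dfs.namenode.shared.edits.dir";

    // Mirror of the patch's fallback order: plain key, then KEY.<nsId>, then
    // KEY.<nsId>.<nnId> for each NameNode, which must all name the same URI.
    static String resolve(Map<String, String> conf, String nsId, List<String> nnIds) {
        String uri = conf.get(KEY);
        if ((uri == null || uri.isEmpty()) && nsId != null) {
            uri = conf.get(KEY + "." + nsId);
        }
        if ((uri == null || uri.isEmpty()) && nsId != null) {
            Set<String> uris = new HashSet<>();
            for (String nnId : nnIds) {
                String v = conf.get(KEY + "." + nsId + "." + nnId);
                if (v != null) {
                    uris.add(v);
                    uri = v;
                }
            }
            if (uris.size() > 1) {
                return null; // conflicting per-NN values: misconfiguration
            }
        }
        return uri;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(KEY + ".ns1.nn1", "qjournal://jn1:8485;jn2:8485/ns1");
        conf.put(KEY + ".ns1.nn2", "qjournal://jn1:8485;jn2:8485/ns1");
        System.out.println(resolve(conf, "ns1", Arrays.asList("nn1", "nn2")));
    }
}
```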


[28/50] [abbrv] hadoop git commit: HDFS-12791. NameNode Fsck http Connection can timeout for directories with multiple levels. Contributed by Mukul Kumar Singh.

2017-11-13 Thread kkaranasos
HDFS-12791. NameNode Fsck http Connection can timeout for directories with 
multiple levels. Contributed by Mukul Kumar Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/10a1f557
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/10a1f557
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/10a1f557

Branch: refs/heads/YARN-6592
Commit: 10a1f557e775e7a55958b106dd10021ac7394843
Parents: 5eb7dbe
Author: Chen Liang 
Authored: Thu Nov 9 18:47:34 2017 -0800
Committer: Chen Liang 
Committed: Thu Nov 9 18:47:34 2017 -0800

--
 .../apache/hadoop/hdfs/server/namenode/NamenodeFsck.java | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/10a1f557/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
index 5872955..b6d6971 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
@@ -471,6 +471,13 @@ public class NamenodeFsck implements DataEncryptionKeyFactory {
   void check(String parent, HdfsFileStatus file, Result replRes, Result ecRes)
   throws IOException {
 String path = file.getFullName(parent);
+if (showprogress &&
+(totalDirs + totalSymlinks + replRes.totalFiles + ecRes.totalFiles)
+% 100 == 0) {
+  out.println();
+  out.flush();
+}
+
 if (file.isDirectory()) {
   checkDir(path, replRes, ecRes);
   return;
@@ -489,10 +496,6 @@ public class NamenodeFsck implements DataEncryptionKeyFactory {
 
 final Result r = file.getErasureCodingPolicy() != null ? ecRes: replRes;
 collectFileSummary(path, file, r, blocks);
-if (showprogress && (replRes.totalFiles + ecRes.totalFiles) % 100 == 0) {
-  out.println();
-  out.flush();
-}
 collectBlocksSummary(parent, file, r, blocks);
   }
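The moved check amounts to: count every checked item — files, directories, and symlinks alike, not just files — and emit output on every 100th, so the fsck HTTP connection never sits idle while deep directory trees are walked. A stand-alone sketch of that counter (`flushes` stands in for `out.println(); out.flush();`):

```java
// Sketch of the fsck progress fix: directories count toward the periodic
// flush too, keeping the client connection alive on directory-heavy trees.
public class ProgressPrinter {
    private long checkedItems = 0;
    private int flushes = 0;

    void checked() {
        checkedItems++;
        if (checkedItems % 100 == 0) {
            flushes++; // in NamenodeFsck: out.println(); out.flush();
        }
    }

    int getFlushes() { return flushes; }

    public static void main(String[] args) {
        ProgressPrinter p = new ProgressPrinter();
        for (int i = 0; i < 250; i++) {
            p.checked(); // 250 items -> flushes at 100 and 200
        }
        System.out.println(p.getFlushes()); // 2
    }
}
```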
 





[49/50] [abbrv] hadoop git commit: YARN-6594. [API] Introduce SchedulingRequest object. (Konstantinos Karanasos via wangda)

2017-11-13 Thread kkaranasos
YARN-6594. [API] Introduce SchedulingRequest object. (Konstantinos Karanasos 
via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1670b350
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1670b350
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1670b350

Branch: refs/heads/YARN-6592
Commit: 1670b350ee41ef1632f314679ab29c53a0a459f4
Parents: 9cd3794
Author: Wangda Tan 
Authored: Mon Oct 30 16:54:02 2017 -0700
Committer: Konstantinos Karanasos 
Committed: Mon Nov 13 15:04:05 2017 -0800

--
 .../hadoop/yarn/api/records/ResourceSizing.java |  64 +
 .../yarn/api/records/SchedulingRequest.java | 205 ++
 .../src/main/proto/yarn_protos.proto|  14 +
 .../records/impl/pb/ResourceSizingPBImpl.java   | 117 
 .../impl/pb/SchedulingRequestPBImpl.java| 266 +++
 5 files changed, 666 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1670b350/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
new file mode 100644
index 000..d82be11
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.records;
+
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.util.Records;
+
+/**
+ * {@code ResourceSizing} contains information for the size of a
+ * {@link SchedulingRequest}, such as the number of requested allocations and
+ * the resources for each allocation.
+ */
+@Public
+@Unstable
+public abstract class ResourceSizing {
+
+  @Public
+  @Unstable
+  public static ResourceSizing newInstance(Resource resources) {
+return ResourceSizing.newInstance(1, resources);
+  }
+
+  @Public
+  @Unstable
+  public static ResourceSizing newInstance(int numAllocations, Resource 
resources) {
+ResourceSizing resourceSizing = Records.newRecord(ResourceSizing.class);
+resourceSizing.setNumAllocations(numAllocations);
+resourceSizing.setResources(resources);
+return resourceSizing;
+  }
+
+  @Public
+  @Unstable
+  public abstract int getNumAllocations();
+
+  @Public
+  @Unstable
+  public abstract void setNumAllocations(int numAllocations);
+
+  @Public
+  @Unstable
+  public abstract Resource getResources();
+
+  @Public
+  @Unstable
+  public abstract void setResources(Resource resources);
+}
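`ResourceSizing` above follows the usual YARN record pattern: an abstract public class with static `newInstance` factories, with the concrete implementation (a PBImpl created via `Records.newRecord`) supplied separately. A self-contained sketch of the same pattern — names and the plain `long memoryMb` field are illustrative, not the YARN API:

```java
// Record pattern sketch: abstract public surface + static factories, with a
// swappable private implementation (a protobuf-backed PBImpl in YARN).
public abstract class ResourceSizingSketch {
    public static ResourceSizingSketch newInstance(long memoryMb) {
        return newInstance(1, memoryMb); // one allocation by default
    }

    public static ResourceSizingSketch newInstance(int numAllocations, long memoryMb) {
        ResourceSizingSketch r = new Impl(); // Records.newRecord(...) in YARN
        r.setNumAllocations(numAllocations);
        r.setMemoryMb(memoryMb);
        return r;
    }

    public abstract int getNumAllocations();
    public abstract void setNumAllocations(int numAllocations);
    public abstract long getMemoryMb();
    public abstract void setMemoryMb(long memoryMb);

    private static final class Impl extends ResourceSizingSketch {
        private int numAllocations;
        private long memoryMb;
        public int getNumAllocations() { return numAllocations; }
        public void setNumAllocations(int n) { this.numAllocations = n; }
        public long getMemoryMb() { return memoryMb; }
        public void setMemoryMb(long mb) { this.memoryMb = mb; }
    }
}
```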

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1670b350/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
new file mode 100644
index 000..47a0697
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
@@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this 

[48/50] [abbrv] hadoop git commit: YARN-6593. [API] Introduce Placement Constraint object. (Konstantinos Karanasos via wangda)

2017-11-13 Thread kkaranasos
YARN-6593. [API] Introduce Placement Constraint object. (Konstantinos Karanasos 
via wangda)

Change-Id: Id00edb7185fdf01cce6e40f920cac3585f8cbe9c


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9cd3794a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9cd3794a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9cd3794a

Branch: refs/heads/YARN-6592
Commit: 9cd3794abe474f2541167e93cf4e894c6a5f0cc3
Parents: f871b75
Author: Wangda Tan 
Authored: Thu Aug 3 14:03:55 2017 -0700
Committer: Konstantinos Karanasos 
Committed: Mon Nov 13 15:04:05 2017 -0800

--
 .../yarn/api/resource/PlacementConstraint.java  | 567 +++
 .../yarn/api/resource/PlacementConstraints.java | 286 ++
 .../hadoop/yarn/api/resource/package-info.java  |  23 +
 .../src/main/proto/yarn_protos.proto|  55 ++
 .../api/resource/TestPlacementConstraints.java  | 106 
 .../PlacementConstraintFromProtoConverter.java  | 116 
 .../pb/PlacementConstraintToProtoConverter.java | 174 ++
 .../apache/hadoop/yarn/api/pb/package-info.java |  23 +
 .../yarn/api/records/impl/pb/ProtoUtils.java|  27 +
 .../PlacementConstraintTransformations.java | 209 +++
 .../hadoop/yarn/api/resource/package-info.java  |  23 +
 .../TestPlacementConstraintPBConversion.java| 195 +++
 .../TestPlacementConstraintTransformations.java | 183 ++
 13 files changed, 1987 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9cd3794a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
new file mode 100644
index 000..f0e3982
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
@@ -0,0 +1,567 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.resource;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+
+/**
+ * {@code PlacementConstraint} represents a placement constraint for a resource
+ * allocation.
+ */
+@Public
+@Unstable
+public class PlacementConstraint {
+
+  /**
+   * The constraint expression tree.
+   */
+  private AbstractConstraint constraintExpr;
+
+  public PlacementConstraint(AbstractConstraint constraintExpr) {
+this.constraintExpr = constraintExpr;
+  }
+
+  /**
+   * Get the constraint expression of the placement constraint.
+   *
+   * @return the constraint expression
+   */
+  public AbstractConstraint getConstraintExpr() {
+return constraintExpr;
+  }
+
+  /**
+   * Interface used to enable the elements of the constraint tree to be 
visited.
+   */
+  @Private
+  public interface Visitable {
+/**
+ * Visitor pattern.
+ *
+ * @param visitor visitor to be used
+ * @param <T> defines the type that the visitor will use and the return type
+ *  of the accept.
+ * @return the result of visiting a given object.
+ */
+ <T> T accept(Visitor<T> visitor);
+
+  }
+
+  /**
+   * Visitor API for a constraint tree.
+   *
+   * @param <T> determines the return type of the visit methods.
+   */
+  @Private
+  public interface Visitor<T> {
+T visit(SingleConstraint constraint);
+
+T visit(TargetExpression target);
+
+T visit(TargetConstraint constraint);
+
+T visit(CardinalityConstraint constraint);
+
+T 

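The listing is truncated above, but the `Visitable`/`Visitor` pair is a standard visitor over the constraint tree: each node type accepts a typed visitor and dispatches to the matching `visit` overload. A self-contained sketch of that pattern with toy node types (not the YARN classes):

```java
// Visitor-pattern sketch over a two-node-kind tree: leaf constraints and an
// AND composite. LeafCounter tallies leaves by walking the tree.
public class ConstraintVisitorSketch {
    interface Visitable {
        <T> T accept(Visitor<T> visitor);
    }

    interface Visitor<T> {
        T visit(Single single);
        T visit(And and);
    }

    static final class Single implements Visitable {
        public <T> T accept(Visitor<T> visitor) { return visitor.visit(this); }
    }

    static final class And implements Visitable {
        final Visitable[] children;
        And(Visitable... children) { this.children = children; }
        public <T> T accept(Visitor<T> visitor) { return visitor.visit(this); }
    }

    static final class LeafCounter implements Visitor<Integer> {
        public Integer visit(Single single) { return 1; }
        public Integer visit(And and) {
            int n = 0;
            for (Visitable child : and.children) {
                n += child.accept(this); // recurse into the subtree
            }
            return n;
        }
    }

    public static void main(String[] args) {
        Visitable tree = new And(new Single(), new And(new Single(), new Single()));
        System.out.println(tree.accept(new LeafCounter())); // 3
    }
}
```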
[31/50] [abbrv] hadoop git commit: HADOOP-14929. Cleanup usage of decodecomponent and use QueryStringDecoder from netty. Contributed by Bharat Viswanadham.

2017-11-13 Thread kkaranasos
HADOOP-14929. Cleanup usage of decodecomponent and use QueryStringDecoder from 
netty. Contributed by Bharat Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1d6f8beb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1d6f8beb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1d6f8beb

Branch: refs/heads/YARN-6592
Commit: 1d6f8bebe9d20c958e419c140109e3d9fec8cb46
Parents: 8a1bd9a
Author: Arpit Agarwal 
Authored: Fri Nov 10 16:28:12 2017 -0800
Committer: Arpit Agarwal 
Committed: Fri Nov 10 16:28:12 2017 -0800

--
 .../datanode/web/webhdfs/ParameterParser.java   | 66 ++--
 .../web/webhdfs/TestParameterParser.java| 81 +++-
 2 files changed, 87 insertions(+), 60 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d6f8beb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java
index 16380e5..2b3a393 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java
@@ -44,7 +44,6 @@ import org.apache.hadoop.security.token.Token;
 
 import java.io.IOException;
 import java.net.URI;
-import java.nio.charset.Charset;
 import java.nio.charset.StandardCharsets;
 import java.util.EnumSet;
 import java.util.List;
@@ -143,9 +142,13 @@ class ParameterParser {
   }
 
   public EnumSet<CreateFlag> createFlag() {
-String cf =
-decodeComponent(param(CreateFlagParam.NAME), StandardCharsets.UTF_8);
-
+String cf = "";
+if (param(CreateFlagParam.NAME) != null) {
+  QueryStringDecoder decoder = new QueryStringDecoder(
+  param(CreateFlagParam.NAME),
+  StandardCharsets.UTF_8);
+  cf = decoder.path();
+}
 return new CreateFlagParam(cf).getValue();
   }
 
@@ -159,61 +162,6 @@ class ParameterParser {
   }
 
   /**
-   * The following function behaves exactly the same as netty's
-   * QueryStringDecoder#decodeComponent except that it
-   * does not decode the '+' character as space. WebHDFS takes this scheme
-   * to maintain the backward-compatibility for pre-2.7 releases.
-   */
-  private static String decodeComponent(final String s, final Charset charset) {
-if (s == null) {
-  return "";
-}
-final int size = s.length();
-boolean modified = false;
-for (int i = 0; i < size; i++) {
-  final char c = s.charAt(i);
-  if (c == '%' || c == '+') {
-modified = true;
-break;
-  }
-}
-if (!modified) {
-  return s;
-}
-final byte[] buf = new byte[size];
-int pos = 0;  // position in `buf'.
-for (int i = 0; i < size; i++) {
-  char c = s.charAt(i);
-  if (c == '%') {
-if (i == size - 1) {
-  throw new IllegalArgumentException("unterminated escape sequence at" +
- " end of string: " + s);
-}
-c = s.charAt(++i);
-if (c == '%') {
-  buf[pos++] = '%';  // "%%" -> "%"
-  break;
-}
-if (i == size - 1) {
-  throw new IllegalArgumentException("partial escape sequence at end " +
- "of string: " + s);
-}
-c = decodeHexNibble(c);
-final char c2 = decodeHexNibble(s.charAt(++i));
-if (c == Character.MAX_VALUE || c2 == Character.MAX_VALUE) {
-  throw new IllegalArgumentException(
-  "invalid escape sequence `%" + s.charAt(i - 1) + s.charAt(
-  i) + "' at index " + (i - 2) + " of: " + s);
-}
-c = (char) (c * 16 + c2);
-// Fall through.
-  }
-  buf[pos++] = (byte) c;
-}
-return new String(buf, 0, pos, charset);
-  }
-
-  /**
* Helper to decode half of a hexadecimal number from a string.
* @param c The ASCII character of the hexadecimal number to decode.
* Must be in the range {@code [0-9a-fA-F]}.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d6f8beb/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestParameterParser.java
--
diff --git 

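The removed helper existed because WebHDFS must percent-decode parameters without treating `+` as a space, for pre-2.7 backward compatibility; the patch replaces it with netty's `QueryStringDecoder`, whose path-style decoding preserves that behavior. A stand-alone sketch of plus-preserving percent decoding — an illustration of the rule, not the netty implementation (this lenient version passes malformed escapes through rather than throwing):

```java
// Percent-decode a string while leaving '+' intact. java.net.URLDecoder would
// turn '+' into a space, which is exactly what WebHDFS must avoid.
public class PlusSafeDecoder {
    static String decode(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '%' && i + 2 < s.length()) {
                // decode "%XY" as one character; '+' falls through untouched
                out.append((char) Integer.parseInt(s.substring(i + 1, i + 3), 16));
                i += 2;
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("a%20b+c")); // a b+c
    }
}
```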
[18/50] [abbrv] hadoop git commit: HADOOP-15012. Add readahead, dropbehind, and unbuffer to StreamCapabilities. Contributed by John Zhuge.

2017-11-13 Thread kkaranasos
HADOOP-15012. Add readahead, dropbehind, and unbuffer to StreamCapabilities. 
Contributed by John Zhuge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bf6a6602
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bf6a6602
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bf6a6602

Branch: refs/heads/YARN-6592
Commit: bf6a660232b01642b07697a289c773ea5b97217c
Parents: 0a72c2f
Author: John Zhuge 
Authored: Mon Nov 6 23:54:27 2017 -0800
Committer: John Zhuge 
Committed: Thu Nov 9 10:16:12 2017 -0800

--
 .../org/apache/hadoop/fs/FSDataInputStream.java | 15 +++---
 .../apache/hadoop/fs/StreamCapabilities.java| 48 +-
 .../hadoop/fs/StreamCapabilitiesPolicy.java | 51 
 .../src/site/markdown/filesystem/filesystem.md  | 21 +---
 .../org/apache/hadoop/hdfs/DFSInputStream.java  | 16 +-
 .../org/apache/hadoop/hdfs/DFSOutputStream.java | 12 ++---
 .../hadoop/fs/azure/BlockBlobAppendStream.java  | 17 ---
 7 files changed, 140 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf6a6602/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
index a80279d..08d71f1 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
@@ -38,7 +38,7 @@ import org.apache.hadoop.util.IdentityHashStore;
 public class FSDataInputStream extends DataInputStream
 implements Seekable, PositionedReadable, 
   ByteBufferReadable, HasFileDescriptor, CanSetDropBehind, CanSetReadahead,
-  HasEnhancedByteBufferAccess, CanUnbuffer {
+  HasEnhancedByteBufferAccess, CanUnbuffer, StreamCapabilities {
   /**
* Map ByteBuffers that we have handed out to readers to ByteBufferPool 
* objects
@@ -227,12 +227,15 @@ public class FSDataInputStream extends DataInputStream
 
   @Override
   public void unbuffer() {
-try {
-  ((CanUnbuffer)in).unbuffer();
-} catch (ClassCastException e) {
-  throw new UnsupportedOperationException("this stream " +
-  in.getClass().getName() + " does not " + "support unbuffering.");
+StreamCapabilitiesPolicy.unbuffer(in);
+  }
+
+  @Override
+  public boolean hasCapability(String capability) {
+if (in instanceof StreamCapabilities) {
+  return ((StreamCapabilities) in).hasCapability(capability);
 }
+return false;
   }
 
   /**

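The `hasCapability` delegation added above follows a simple pattern: probe the wrapped stream if it advertises capabilities, otherwise answer false. A self-contained sketch with toy types (not the Hadoop classes):

```java
// Capability-probe delegation sketch: the wrapper forwards the query only
// when the inner stream implements the capabilities interface.
public class CapabilityWrapper {
    interface Capabilities {
        boolean hasCapability(String capability);
    }

    private final Object in; // underlying stream; may or may not expose capabilities

    CapabilityWrapper(Object in) { this.in = in; }

    boolean hasCapability(String capability) {
        if (in instanceof Capabilities) {
            return ((Capabilities) in).hasCapability(capability);
        }
        return false; // plain streams support nothing queryable
    }

    public static void main(String[] args) {
        Capabilities inner = c -> "in:unbuffer".equals(c);
        System.out.println(new CapabilityWrapper(inner).hasCapability("in:unbuffer")); // true
        System.out.println(new CapabilityWrapper(new Object()).hasCapability("hsync")); // false
    }
}
```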
http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf6a6602/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StreamCapabilities.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StreamCapabilities.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StreamCapabilities.java
index 65aa679..3549cdc 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StreamCapabilities.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StreamCapabilities.java
@@ -23,27 +23,49 @@ import org.apache.hadoop.classification.InterfaceStability;
 
 /**
  * Interface to query streams for supported capabilities.
+ *
+ * Capability strings must be in lower case.
+ *
+ * Constant strings are chosen over enums in order to allow other file systems
+ * to define their own capabilities.
  */
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public interface StreamCapabilities {
   /**
+   * Stream hflush capability implemented by {@link Syncable#hflush()}.
+   */
+  String HFLUSH = "hflush";
+
+  /**
+   * Stream hsync capability implemented by {@link Syncable#hsync()}.
+   */
+  String HSYNC = "hsync";
+
+  /**
+   * Stream setReadahead capability implemented by
+   * {@link CanSetReadahead#setReadahead(Long)}.
+   */
+  String READAHEAD = "in:readahead";
+
+  /**
+   * Stream setDropBehind capability implemented by
+   * {@link CanSetDropBehind#setDropBehind(Boolean)}.
+   */
+  String DROPBEHIND = "dropbehind";
+
+  /**
+   * Stream unbuffer capability implemented by {@link CanUnbuffer#unbuffer()}.
+   */
+  String UNBUFFER = "in:unbuffer";
+
+  /**
* Capabilities that a stream can support and be queried for.
*/
+  @Deprecated
   enum StreamCapability {
-/**
- * Stream hflush capability to flush out the data in client's 
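The pattern this patch introduces — lower-case string capability keys queried through `hasCapability`, with wrappers delegating to their inner stream — can be sketched outside Hadoop as follows. The nested `Capabilities` interface and `WrappedStream` class below are simplified stand-ins for the real `org.apache.hadoop.fs` types, not the actual API:

```java
public class StreamCapabilitiesDemo {

    // Simplified stand-in for org.apache.hadoop.fs.StreamCapabilities:
    // capabilities are lower-case string keys so other file systems can
    // define their own without touching a shared enum.
    interface Capabilities {
        String HFLUSH = "hflush";
        String UNBUFFER = "in:unbuffer";

        boolean hasCapability(String capability);
    }

    // Wrapper that forwards the query to its inner stream when the inner
    // stream advertises capabilities, mirroring the override in the diff.
    static class WrappedStream implements Capabilities {
        private final Object in;

        WrappedStream(Object in) {
            this.in = in;
        }

        @Override
        public boolean hasCapability(String capability) {
            if (in instanceof Capabilities) {
                return ((Capabilities) in).hasCapability(capability);
            }
            return false; // unknown inner stream: advertise nothing
        }
    }

    static boolean demo() {
        Capabilities inner = c -> Capabilities.HFLUSH.equals(c);
        WrappedStream wrapped = new WrappedStream(inner);
        return wrapped.hasCapability(Capabilities.HFLUSH)
            && !wrapped.hasCapability(Capabilities.UNBUFFER)
            && !new WrappedStream("not a stream").hasCapability(Capabilities.HFLUSH);
    }

    public static void main(String[] args) {
        if (!demo()) {
            throw new AssertionError("capability queries did not behave as sketched");
        }
        System.out.println("ok");
    }
}
```

The delegation keeps capability queries honest across wrapper layers: a stream only claims a capability its underlying stream actually supports.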

[27/50] [abbrv] hadoop git commit: Fixing Job History Server Configuration parsing. (Jason Lowe via asuresh)

2017-11-13 Thread kkaranasos
Fixing Job History Server Configuration parsing. (Jason Lowe via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5eb7dbe9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5eb7dbe9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5eb7dbe9

Branch: refs/heads/YARN-6592
Commit: 5eb7dbe9b31a45f57f2e1623aa1c9ce84a56c4d1
Parents: dd07038
Author: Arun Suresh 
Authored: Thu Nov 9 15:15:51 2017 -0800
Committer: Arun Suresh 
Committed: Thu Nov 9 15:15:51 2017 -0800

--
 .../org/apache/hadoop/conf/Configuration.java   | 163 ++-
 .../mapreduce/v2/hs/HistoryFileManager.java |   2 +-
 2 files changed, 122 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5eb7dbe9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index f94eba6..dfbeec7 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.conf;
 
+import com.ctc.wstx.api.ReaderConfig;
 import com.ctc.wstx.io.StreamBootstrapper;
 import com.ctc.wstx.io.SystemId;
 import com.ctc.wstx.stax.WstxInputFactory;
@@ -70,6 +71,7 @@ import java.util.concurrent.atomic.AtomicReference;
 
 import javax.xml.parsers.DocumentBuilderFactory;
 import javax.xml.parsers.ParserConfigurationException;
+import javax.xml.stream.XMLInputFactory;
 import javax.xml.stream.XMLStreamConstants;
 import javax.xml.stream.XMLStreamException;
 import javax.xml.stream.XMLStreamReader;
@@ -91,6 +93,7 @@ import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.WritableUtils;
 import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.alias.CredentialProvider;
 import org.apache.hadoop.security.alias.CredentialProvider.CredentialEntry;
 import org.apache.hadoop.security.alias.CredentialProviderFactory;
@@ -206,19 +209,31 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
   private static final String DEFAULT_STRING_CHECK =
 "testingforemptydefaultvalue";
 
+  private static boolean restrictSystemPropsDefault = false;
+  private boolean restrictSystemProps = restrictSystemPropsDefault;
   private boolean allowNullValueProperties = false;
 
   private static class Resource {
 private final Object resource;
 private final String name;
+private final boolean restrictParser;
 
 public Resource(Object resource) {
   this(resource, resource.toString());
 }
-
+
+public Resource(Object resource, boolean useRestrictedParser) {
+  this(resource, resource.toString(), useRestrictedParser);
+}
+
 public Resource(Object resource, String name) {
+  this(resource, name, getRestrictParserDefault(resource));
+}
+
+public Resource(Object resource, String name, boolean restrictParser) {
   this.resource = resource;
   this.name = name;
+  this.restrictParser = restrictParser;
 }
 
 public String getName(){
@@ -228,11 +243,28 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 public Object getResource() {
   return resource;
 }
-
+
+public boolean isParserRestricted() {
+  return restrictParser;
+}
+
 @Override
 public String toString() {
   return name;
 }
+
+private static boolean getRestrictParserDefault(Object resource) {
+  if (resource instanceof String) {
+return false;
+  }
+  UserGroupInformation user;
+  try {
+user = UserGroupInformation.getCurrentUser();
+  } catch (IOException e) {
+throw new RuntimeException("Unable to determine current user", e);
+  }
+  return user.getRealUser() != null;
+}
   }
   
   /**
@@ -254,7 +286,7 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
   new ConcurrentHashMap());
   
   private boolean loadDefaults = true;
-  
+
   /**
* Configuration objects
*/
@@ -777,6 +809,7 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 this.overlay = (Properties)other.overlay.clone();
   }
 
+  this.restrictSystemProps = other.restrictSystemProps;
   if (other.updatingResource != null) {
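The restricted-parser default added above hinges on one question per resource: is the configuration being loaded on behalf of a proxied (doAs) user? String resources name files on the ordinary load path and keep the full parser; anything else defaults to the restricted parser when the current user has a real user behind it. A minimal sketch of that decision, with a plain boolean standing in for the `UserGroupInformation` lookup (the method and parameter names below are illustrative, not Hadoop API):

```java
public class RestrictedParserDemo {

    // Decide whether a configuration resource should default to the
    // restricted XML parser. `currentUserIsProxied` stands in for
    // UserGroupInformation.getCurrentUser().getRealUser() != null.
    static boolean restrictParserDefault(Object resource, boolean currentUserIsProxied) {
        if (resource instanceof String) {
            return false; // named files keep the full parser
        }
        return currentUserIsProxied;
    }

    public static void main(String[] args) {
        if (restrictParserDefault("core-site.xml", true)) {
            throw new AssertionError("String resources should never be restricted");
        }
        if (!restrictParserDefault(new byte[0], true)) {
            throw new AssertionError("non-String resources restrict for proxied users");
        }
        if (restrictParserDefault(new byte[0], false)) {
            throw new AssertionError("ordinary users keep the full parser");
        }
        System.out.println("ok");
    }
}
```

Carrying the flag on each `Resource` (rather than one global toggle) lets trusted and untrusted configuration sources coexist in the same `Configuration` object.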

[23/50] [abbrv] hadoop git commit: YARN-7437. Rename PlacementSet and SchedulingPlacementSet. (Wangda Tan via kkaranasos)

2017-11-13 Thread kkaranasos
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ac4d2b10/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/LocalityAppPlacementAllocator.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/LocalityAppPlacementAllocator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/LocalityAppPlacementAllocator.java
new file mode 100644
index 000..7f89435
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/LocalityAppPlacementAllocator.java
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement;
+
+import org.apache.commons.collections.IteratorUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.yarn.api.records.ResourceRequest;
+import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.PendingAsk;
+import org.apache.hadoop.yarn.server.scheduler.SchedulerRequestKey;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+/**
+ * This is an implementation of the {@link AppPlacementAllocator} that takes
+ * into account locality preferences (node, rack, any) when allocating
+ * containers.
+ */
+public class LocalityAppPlacementAllocator<N extends SchedulerNode>
+    implements AppPlacementAllocator<N> {
+  private static final Log LOG =
+  LogFactory.getLog(LocalityAppPlacementAllocator.class);
+
+  private final Map<String, ResourceRequest> resourceRequestMap =
+  new ConcurrentHashMap<>();
+  private AppSchedulingInfo appSchedulingInfo;
+  private volatile String primaryRequestedPartition =
+  RMNodeLabelsManager.NO_LABEL;
+
+  private final ReentrantReadWriteLock.ReadLock readLock;
+  private final ReentrantReadWriteLock.WriteLock writeLock;
+
+  public LocalityAppPlacementAllocator(AppSchedulingInfo info) {
+ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+readLock = lock.readLock();
+writeLock = lock.writeLock();
+this.appSchedulingInfo = info;
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public Iterator<N> getPreferredNodeIterator(
+      CandidateNodeSet<N> candidateNodeSet) {
+// Now only handle the case that single node in the candidateNodeSet
+// TODO, Add support to multi-hosts inside candidateNodeSet which is passed
+// in.
+
+N singleNode = CandidateNodeSetUtils.getSingleNode(candidateNodeSet);
+if (null != singleNode) {
+  return IteratorUtils.singletonIterator(singleNode);
+}
+
+return IteratorUtils.emptyIterator();
+  }
+
+  private boolean hasRequestLabelChanged(ResourceRequest requestOne,
+  ResourceRequest requestTwo) {
+String requestOneLabelExp = requestOne.getNodeLabelExpression();
+String requestTwoLabelExp = requestTwo.getNodeLabelExpression();
+// First request label expression can be null and second request
+// is not null then we have to consider it as changed.
+if ((null == requestOneLabelExp) && (null != requestTwoLabelExp)) {
+  return 
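The `getPreferredNodeIterator` logic above reduces to: if the candidate set contains exactly one node, iterate over just that node; otherwise iterate over nothing (multi-node candidate sets are left as a TODO in the patch). A self-contained sketch of that shape, using `java.util` collections in place of the YARN candidate-set types — the helper names below mimic, but are not, the real `CandidateNodeSetUtils` API:

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

public class PreferredNodeIteratorDemo {

    // Stand-in for CandidateNodeSetUtils.getSingleNode: returns the node only
    // when the candidate set holds exactly one entry, else null.
    static <N> N getSingleNode(List<N> candidates) {
        return candidates.size() == 1 ? candidates.get(0) : null;
    }

    static <N> Iterator<N> preferredNodeIterator(List<N> candidates) {
        N single = getSingleNode(candidates);
        if (single != null) {
            return Collections.singletonList(single).iterator();
        }
        return Collections.emptyIterator();
    }

    public static void main(String[] args) {
        Iterator<String> one = preferredNodeIterator(List.of("node-1"));
        if (!one.hasNext() || !one.next().equals("node-1")) {
            throw new AssertionError("expected the single candidate node");
        }
        // Multi-node candidate sets are not handled yet, so they fall
        // through to an empty iterator, as does an empty set.
        if (preferredNodeIterator(List.of("a", "b")).hasNext()) {
            throw new AssertionError("expected an empty iterator");
        }
        System.out.println("ok");
    }
}
```

Returning an empty iterator (instead of null) lets the scheduler's allocation loop treat "no preferred node" and "one preferred node" uniformly.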

[35/50] [abbrv] hadoop git commit: YARN-7452. Decommissioning node default value to be zero in new YARN UI. Contributed by Vasudevan Skm.

2017-11-13 Thread kkaranasos
YARN-7452. Decommissioning node default value to be zero in new YARN UI. 
Contributed by Vasudevan Skm.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ff9f7fcf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ff9f7fcf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ff9f7fcf

Branch: refs/heads/YARN-6592
Commit: ff9f7fcf7f67095f3ab9d257624dee6e16363b1e
Parents: 2c2b7a3
Author: Sunil G 
Authored: Sat Nov 11 16:52:31 2017 +0530
Committer: Sunil G 
Committed: Sat Nov 11 16:52:31 2017 +0530

--
 .../hadoop-yarn-ui/src/main/webapp/app/models/cluster-metric.js| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ff9f7fcf/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/cluster-metric.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/cluster-metric.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/cluster-metric.js
index dcc0c29..0be0d83 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/cluster-metric.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/cluster-metric.js
@@ -89,7 +89,7 @@ export default DS.Model.extend({
 });
 arr.push({
   label: "Decommissioning",
-  value: this.get("decommissioningNodes")
+  value: this.get("decommissioningNodes") || 0
 });
 arr.push({
   label: "Decomissioned",


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[09/50] [abbrv] hadoop git commit: MAPREDUCE-7001. Moving logging APIs over to slf4j in hadoop-mapreduce-client-shuffle. Contributed by Jinjiang Ling.

2017-11-13 Thread kkaranasos
MAPREDUCE-7001. Moving logging APIs over to slf4j in 
hadoop-mapreduce-client-shuffle. Contributed by Jinjiang Ling.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e4c220ee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e4c220ee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e4c220ee

Branch: refs/heads/YARN-6592
Commit: e4c220ee4fdc4550275bda0fa9468d7d87d143c3
Parents: ffee10b
Author: Akira Ajisaka 
Authored: Wed Nov 8 19:28:08 2017 +0900
Committer: Akira Ajisaka 
Committed: Wed Nov 8 19:28:08 2017 +0900

--
 .../apache/hadoop/mapred/FadvisedChunkedFile.java |  7 ---
 .../apache/hadoop/mapred/FadvisedFileRegion.java  |  7 ---
 .../org/apache/hadoop/mapred/ShuffleHandler.java  | 13 +++--
 .../hadoop/mapred/TestFadvisedFileRegion.java | 18 +-
 .../apache/hadoop/mapred/TestShuffleHandler.java  |  7 ---
 5 files changed, 28 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e4c220ee/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedChunkedFile.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedChunkedFile.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedChunkedFile.java
index 7e24e89..6a4e3b4 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedChunkedFile.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedChunkedFile.java
@@ -22,11 +22,11 @@ import java.io.FileDescriptor;
 import java.io.IOException;
 import java.io.RandomAccessFile;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.io.ReadaheadPool;
 import org.apache.hadoop.io.ReadaheadPool.ReadaheadRequest;
 import org.apache.hadoop.io.nativeio.NativeIO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import static org.apache.hadoop.io.nativeio.NativeIO.POSIX.POSIX_FADV_DONTNEED;
 
@@ -34,7 +34,8 @@ import org.jboss.netty.handler.stream.ChunkedFile;
 
 public class FadvisedChunkedFile extends ChunkedFile {
 
-  private static final Log LOG = LogFactory.getLog(FadvisedChunkedFile.class);
+  private static final Logger LOG =
+  LoggerFactory.getLogger(FadvisedChunkedFile.class);
 
   private final boolean manageOsCache;
   private final int readaheadLength;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e4c220ee/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
index 79045f9..4b2c8cb 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
@@ -25,11 +25,11 @@ import java.nio.ByteBuffer;
 import java.nio.channels.FileChannel;
 import java.nio.channels.WritableByteChannel;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.io.ReadaheadPool;
 import org.apache.hadoop.io.ReadaheadPool.ReadaheadRequest;
 import org.apache.hadoop.io.nativeio.NativeIO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import static org.apache.hadoop.io.nativeio.NativeIO.POSIX.POSIX_FADV_DONTNEED;
 
@@ -39,7 +39,8 @@ import com.google.common.annotations.VisibleForTesting;
 
 public class FadvisedFileRegion extends DefaultFileRegion {
 
-  private static final Log LOG = LogFactory.getLog(FadvisedFileRegion.class);
+  private static final Logger LOG =
+  LoggerFactory.getLogger(FadvisedFileRegion.class);
 
   private final boolean manageOsCache;
   private final int readaheadLength;


[10/50] [abbrv] hadoop git commit: YARN-7453. Fix issue where RM fails to switch to active after first successful start. (Rohith Sharma K S via asuresh)

2017-11-13 Thread kkaranasos
YARN-7453. Fix issue where RM fails to switch to active after first successful 
start. (Rohith Sharma K S via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a9c70b0e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a9c70b0e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a9c70b0e

Branch: refs/heads/YARN-6592
Commit: a9c70b0e84dab0c41e480a0dc0cb1a22efdc64ee
Parents: e4c220e
Author: Arun Suresh 
Authored: Wed Nov 8 08:00:53 2017 -0800
Committer: Arun Suresh 
Committed: Wed Nov 8 08:00:53 2017 -0800

--
 .../yarn/server/resourcemanager/ResourceManager.java| 12 ++--
 .../server/resourcemanager/recovery/ZKRMStateStore.java |  5 -
 .../scheduler/capacity/conf/ZKConfigurationStore.java   |  3 ++-
 3 files changed, 12 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a9c70b0e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
index 07f5a76..727bc52 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
@@ -358,7 +358,7 @@ public class ResourceManager extends CompositeService 
implements Recoverable {
 conf.getBoolean(YarnConfiguration.CURATOR_LEADER_ELECTOR,
 YarnConfiguration.DEFAULT_CURATOR_LEADER_ELECTOR_ENABLED);
 if (curatorEnabled) {
-  this.zkManager = getAndStartZKManager(conf);
+  this.zkManager = createAndStartZKManager(conf);
   elector = new CuratorBasedElectorService(this);
 } else {
   elector = new ActiveStandbyElectorBasedElectorService(this);
@@ -372,11 +372,8 @@ public class ResourceManager extends CompositeService 
implements Recoverable {
* @return ZooKeeper Curator manager.
* @throws IOException If it cannot create the manager.
*/
-  public synchronized ZKCuratorManager getAndStartZKManager(Configuration
+  public ZKCuratorManager createAndStartZKManager(Configuration
   config) throws IOException {
-if (this.zkManager != null) {
-  return zkManager;
-}
 ZKCuratorManager manager = new ZKCuratorManager(config);
 
 // Get authentication
@@ -396,7 +393,10 @@ public class ResourceManager extends CompositeService 
implements Recoverable {
 }
 
 manager.start(authInfos);
-this.zkManager = manager;
+return manager;
+  }
+
+  public ZKCuratorManager getZKManager() {
 return zkManager;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a9c70b0e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
index 5d3ca45..36b55e5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
@@ -358,7 +358,10 @@ public class ZKRMStateStore extends RMStateStore {
 amrmTokenSecretManagerRoot =
 getNodePath(zkRootNodePath, AMRMTOKEN_SECRET_MANAGER_ROOT);
 reservationRoot = getNodePath(zkRootNodePath, RESERVATION_SYSTEM_ROOT);
-zkManager = resourceManager.getAndStartZKManager(conf);
+zkManager = resourceManager.getZKManager();
+if(zkManager==null) {
+  zkManager = resourceManager.createAndStartZKManager(conf);
+}
 delegationTokenNodeSplitIndex =
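The root cause fixed here is a getter with a create-and-cache side effect: the old `getAndStartZKManager` handed back a cached manager that had been stopped when the RM transitioned to standby, so the next transition to active failed. The fix splits the API into a pure getter plus an explicit factory, and the caller (`ZKRMStateStore` above) decides when a fresh manager is needed. A schematic version of that split — `ZkManager` and `ResourceManagerish` below are toy classes, not Curator or the real RM:

```java
public class ZkManagerLifecycleDemo {

    static class ZkManager {
        boolean started = true;
        void stop() { started = false; }
    }

    static class ResourceManagerish {
        private ZkManager zkManager; // set only when leader election owns one

        ZkManager getZKManager() {            // pure getter, no side effects
            return zkManager;
        }

        ZkManager createAndStartZKManager() { // always a fresh, started manager
            return new ZkManager();
        }
    }

    public static void main(String[] args) {
        ResourceManagerish rm = new ResourceManagerish();

        // Caller-side fallback, as in ZKRMStateStore after the fix:
        ZkManager zk = rm.getZKManager();
        if (zk == null) {
            zk = rm.createAndStartZKManager();
        }
        if (!zk.started) {
            throw new AssertionError("fresh manager should be started");
        }

        // After a stop (standby transition), the next activation gets a
        // new manager instead of reusing the stale one.
        zk.stop();
        ZkManager next = rm.getZKManager() == null
            ? rm.createAndStartZKManager() : rm.getZKManager();
        if (!next.started) {
            throw new AssertionError("second activation must not reuse a stopped manager");
        }
        System.out.println("ok");
    }
}
```

Keeping getters side-effect free makes restart and failover paths easier to reason about: only the factory can produce a started manager, so no code path can accidentally resurrect a dead one.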
 

[05/50] [abbrv] hadoop git commit: HADOOP-15018. Update JAVA_HOME in create-release for Xenial Dockerfile.

2017-11-13 Thread kkaranasos
HADOOP-15018. Update JAVA_HOME in create-release for Xenial Dockerfile.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/51e882d5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/51e882d5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/51e882d5

Branch: refs/heads/YARN-6592
Commit: 51e882d5c9fd2f55cd9ac2eafd3e59eb4f239d9d
Parents: 8db9d61
Author: Andrew Wang 
Authored: Tue Nov 7 16:38:53 2017 -0800
Committer: Andrew Wang 
Committed: Tue Nov 7 16:39:04 2017 -0800

--
 dev-support/bin/create-release | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/51e882d5/dev-support/bin/create-release
--
diff --git a/dev-support/bin/create-release b/dev-support/bin/create-release
index b98c058..694820b 100755
--- a/dev-support/bin/create-release
+++ b/dev-support/bin/create-release
@@ -489,9 +489,9 @@ function dockermode
 echo "RUN mkdir -p /maven"
 echo "RUN chown -R ${user_name} /maven"
 
-# we always force build with the Oracle JDK
+# we always force build with the OpenJDK JDK
 # but with the correct version
-echo "ENV JAVA_HOME /usr/lib/jvm/java-${JVM_VERSION}-oracle"
+echo "ENV JAVA_HOME /usr/lib/jvm/java-${JVM_VERSION}-openjdk-amd64"
 echo "USER ${user_name}"
 printf "\n\n"
   ) | docker build -t "${imgname}" -
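Since the Dockerfile is generated by echoing lines into `docker build -`, the JAVA_HOME change is just a different string being emitted. A quick way to check what the old and new lines expand to for a given `JVM_VERSION` — the paths assume the Ubuntu Xenial package layout the patch targets:

```shell
#!/usr/bin/env sh
JVM_VERSION=8

# Old (Oracle) and new (OpenJDK) forms of the generated Dockerfile line.
old_line="ENV JAVA_HOME /usr/lib/jvm/java-${JVM_VERSION}-oracle"
new_line="ENV JAVA_HOME /usr/lib/jvm/java-${JVM_VERSION}-openjdk-amd64"

echo "$old_line"
echo "$new_line"
```

On Xenial only the `-openjdk-amd64` directory exists, which is why the Oracle path broke the containerized release build.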


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[15/50] [abbrv] hadoop git commit: HDFS-12732. Correct spellings of ramdomly to randomly in log. Contributed by hu xiaodong.

2017-11-13 Thread kkaranasos
HDFS-12732. Correct spellings of ramdomly to randomly in log. Contributed by hu 
xiaodong.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3a3566e1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3a3566e1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3a3566e1

Branch: refs/heads/YARN-6592
Commit: 3a3566e1d1ab5f78cfb734796b41802fe039196d
Parents: f2df6b8
Author: Akira Ajisaka 
Authored: Thu Nov 9 15:13:21 2017 +0900
Committer: Akira Ajisaka 
Committed: Thu Nov 9 15:14:46 2017 +0900

--
 .../hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3a3566e1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
index a479397..b925feb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
@@ -614,7 +614,7 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 
   if (LOG.isDebugEnabled()) {
 LOG.debug("Failed to choose from local rack (location = " + localRack
-+ "); the second replica is not found, retry choosing ramdomly", 
e);
++ "); the second replica is not found, retry choosing randomly", 
e);
   }
   //the second replica is not found, randomly choose one from the network
   return chooseRandom(NodeBase.ROOT, excludedNodes, blocksize,
@@ -636,7 +636,7 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 } catch(NotEnoughReplicasException e) {
   if (LOG.isDebugEnabled()) {
 LOG.debug("Failed to choose from the next rack (location = " + nextRack
-+ "), retry choosing ramdomly", e);
++ "), retry choosing randomly", e);
   }
   //otherwise randomly choose one from the network
   return chooseRandom(NodeBase.ROOT, excludedNodes, blocksize,


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[13/50] [abbrv] hadoop git commit: YARN-7458. TestContainerManagerSecurity is still flakey (Contributed by Robert Kanter via Daniel Templeton)

2017-11-13 Thread kkaranasos
YARN-7458. TestContainerManagerSecurity is still flakey
(Contributed by Robert Kanter via Daniel Templeton)

Change-Id: Ibb1975ad086c3a33f8af0b4f8b9a13c3cdca3f7d


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/49b4c0b3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/49b4c0b3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/49b4c0b3

Branch: refs/heads/YARN-6592
Commit: 49b4c0b334e5472dbbf71b042a6a6b1d4b2ce3b7
Parents: 0de1068
Author: Daniel Templeton 
Authored: Wed Nov 8 17:31:14 2017 -0800
Committer: Daniel Templeton 
Committed: Wed Nov 8 17:31:14 2017 -0800

--
 .../server/TestContainerManagerSecurity.java| 38 
 1 file changed, 24 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/49b4c0b3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
index 1cbad70..ad2f68a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
@@ -28,7 +28,9 @@ import java.util.Arrays;
 import java.util.Collection;
 import java.util.LinkedList;
 import java.util.List;
+import java.util.concurrent.TimeoutException;
 
+import com.google.common.base.Supplier;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.io.DataInputBuffer;
@@ -36,6 +38,7 @@ import org.apache.hadoop.minikdc.KerberosSecurityTestcase;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.SecretManager.InvalidToken;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.yarn.api.ContainerManagementProtocol;
 import org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesResponse;
@@ -49,6 +52,7 @@ import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
 import org.apache.hadoop.yarn.api.records.ContainerState;
+import org.apache.hadoop.yarn.api.records.ContainerStatus;
 import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.Resource;
@@ -404,27 +408,33 @@ public class TestContainerManagerSecurity extends 
KerberosSecurityTestcase {
   newContainerToken, attempt1NMToken, false).isEmpty());
   }
 
-  private void waitForContainerToFinishOnNM(ContainerId containerId) {
+  private void waitForContainerToFinishOnNM(ContainerId containerId)
+  throws TimeoutException, InterruptedException {
 Context nmContext = yarnCluster.getNodeManager(0).getNMContext();
 int interval = 4 * 60; // Max time for container token to expire.
 
-Assert.assertNotNull(nmContext.getContainers().containsKey(containerId));
-
-// Get the container first, as it may be removed from the Context
-// by asynchronous calls.
-// This was leading to a flakey test as otherwise the container could
-// be removed and end up null.
+// If the container is null, then it has already completed and been removed
+// from the Context by asynchronous calls.
 Container waitContainer = nmContext.getContainers().get(containerId);
-
-while ((interval-- > 0)
-&& !waitContainer.cloneAndGetContainerStatus()
-.getState().equals(ContainerState.COMPLETE)) {
+if (waitContainer != null) {
   try {
-LOG.info("Waiting for " + containerId + " to complete.");
-Thread.sleep(1000);
-  } catch (InterruptedException e) {
+LOG.info("Waiting for " + containerId + " to get to state " +
+ContainerState.COMPLETE);
+GenericTestUtils.waitFor(new Supplier<Boolean>() {
+  @Override
+  public Boolean get() {
+return ContainerState.COMPLETE.equals(
+waitContainer.cloneAndGetContainerStatus().getState());
+  }
+}, 
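`GenericTestUtils.waitFor` replaces the hand-rolled sleep loop with a bounded poll: evaluate a condition at a fixed interval until it holds, or throw `TimeoutException` when the budget runs out. A minimal standalone equivalent of that contract — this is the shape of the utility, not Hadoop's implementation:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class WaitForDemo {

    // Poll `check` every checkEveryMillis until it returns true, failing
    // after waitForMillis — the contract GenericTestUtils.waitFor provides.
    static void waitFor(Supplier<Boolean> check, long checkEveryMillis,
                        long waitForMillis)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + waitForMillis;
        while (!check.get()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException(
                    "condition not met in " + waitForMillis + " ms");
            }
            Thread.sleep(checkEveryMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition flips to true after ~50 ms, well inside the 2 s budget.
        waitFor(() -> System.currentTimeMillis() - start > 50, 10, 2000);

        boolean timedOut = false;
        try {
            waitFor(() -> false, 10, 100); // can never succeed
        } catch (TimeoutException expected) {
            timedOut = true;
        }
        if (!timedOut) {
            throw new AssertionError("expected a TimeoutException");
        }
        System.out.println("ok");
    }
}
```

Bounding the wait is what de-flakes the test: a container that never reaches COMPLETE now fails fast with a clear timeout instead of looping on a possibly-null container reference.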

[50/50] [abbrv] hadoop git commit: YARN-6595. [API] Add Placement Constraints at the application level. (Arun Suresh via kkaranasos)

2017-11-13 Thread kkaranasos
YARN-6595. [API] Add Placement Constraints at the application level. (Arun 
Suresh via kkaranasos)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/26684d89
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/26684d89
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/26684d89

Branch: refs/heads/YARN-6592
Commit: 26684d89ab28ff9218caa6ea4cddebfb50c164d9
Parents: 1670b35
Author: Konstantinos Karanasos 
Authored: Mon Nov 13 15:25:24 2017 -0800
Committer: Konstantinos Karanasos 
Committed: Mon Nov 13 15:25:24 2017 -0800

--
 .../RegisterApplicationMasterRequest.java   |  42 -
 .../yarn/api/resource/PlacementConstraint.java  | 156 +++
 .../src/main/proto/yarn_protos.proto|   6 +
 .../src/main/proto/yarn_service_protos.proto|   1 +
 .../RegisterApplicationMasterRequestPBImpl.java | 106 -
 .../hadoop/yarn/api/BasePBImplRecordsTest.java  |  11 ++
 6 files changed, 313 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/26684d89/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
index 395e190..f2d537a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
@@ -18,11 +18,16 @@
 
 package org.apache.hadoop.yarn.api.protocolrecords;
 
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Stable;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
+import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
 import org.apache.hadoop.yarn.util.Records;
-
 /**
  * The request sent by the {@code ApplicationMaster} to {@code ResourceManager}
  * on registration.
@@ -132,4 +137,39 @@ public abstract class RegisterApplicationMasterRequest {
   @Public
   @Stable
   public abstract void setTrackingUrl(String trackingUrl);
+
+  /**
+   * Return all Placement Constraints specified at the Application level. The
+   * mapping is from a set of allocation tags to a
+   * PlacementConstraint associated with the tags, i.e., each
+   * {@link org.apache.hadoop.yarn.api.records.SchedulingRequest} that has
+   * those tags will be placed taking into account the corresponding constraint.
+   *
+   * @return A map of Placement Constraints.
+   */
+  @Public
+  @Unstable
+  public Map<Set<String>, PlacementConstraint> getPlacementConstraints() {
+    return new HashMap<>();
+  }
+
+  /**
+   * Set Placement Constraints applicable to the
+   * {@link org.apache.hadoop.yarn.api.records.SchedulingRequest}s
+   * of this application.
+   * The mapping is from a set of allocation tags to a
+   * PlacementConstraint associated with the tags.
+   * For example:
+   *  Map 
+   *   hb_regionserver -> node_anti_affinity,
+   *   hb_regionserver, hb_master -> rack_affinity,
+   *   ...
+   *  
+   * @param placementConstraints Placement Constraint Mapping.
+   */
+  @Public
+  @Unstable
+  public void setPlacementConstraints(
+      Map<Set<String>, PlacementConstraint> placementConstraints) {
+  }
 }
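Taken together, the new getter/setter pair lets an ApplicationMaster hand a tag-to-constraint map to the ResourceManager at registration time. A minimal, self-contained sketch of that mapping follows; a plain String stands in for org.apache.hadoop.yarn.api.resource.PlacementConstraint so the example has no Hadoop dependency, and the tag names are illustrative:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the application-level constraint mapping introduced above:
// each set of allocation tags maps to one placement constraint.
public class PlacementConstraintMapSketch {
  public static void main(String[] args) {
    Map<Set<String>, String> constraints = new HashMap<>();

    // Anti-affinity among HBase region servers (hypothetical tag).
    constraints.put(new HashSet<>(Arrays.asList("hb_regionserver")),
        "node_anti_affinity");

    // Rack affinity for region servers together with the master.
    constraints.put(new HashSet<>(Arrays.asList("hb_regionserver", "hb_master")),
        "rack_affinity");

    // Two tag sets registered, each with its own constraint.
    System.out.println(constraints.size());
  }
}
```

Any SchedulingRequest carrying one of those tag sets would then be placed subject to the matching constraint.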

http://git-wip-us.apache.org/repos/asf/hadoop/blob/26684d89/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
index f0e3982..b6e851a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
@@ -54,6 +54,26 @@ public class PlacementConstraint {
 return constraintExpr;
   }
 
+  

[44/50] [abbrv] hadoop git commit: YARN-7369. Improve the resource types docs

2017-11-13 Thread kkaranasos
YARN-7369. Improve the resource types docs


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/040a38dc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/040a38dc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/040a38dc

Branch: refs/heads/YARN-6592
Commit: 040a38dc493adf44e9552b8971acf36188c30152
Parents: 2e512f0
Author: Daniel Templeton 
Authored: Mon Nov 13 11:05:07 2017 -0800
Committer: Daniel Templeton 
Committed: Mon Nov 13 11:05:07 2017 -0800

--
 hadoop-project/src/site/site.xml|   2 +-
 .../src/site/markdown/ResourceModel.md  | 275 +++
 .../src/site/markdown/ResourceProfiles.md   | 116 
 3 files changed, 276 insertions(+), 117 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/040a38dc/hadoop-project/src/site/site.xml
--
diff --git a/hadoop-project/src/site/site.xml b/hadoop-project/src/site/site.xml
index 57cff9a..be48ddb 100644
--- a/hadoop-project/src/site/site.xml
+++ b/hadoop-project/src/site/site.xml
@@ -128,6 +128,7 @@
   
   
   
+  
   
   
   
@@ -143,7 +144,6 @@
   
   
   
-  
   
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/040a38dc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md
new file mode 100644
index 000..75e5c92
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md
@@ -0,0 +1,275 @@
+
+
+Hadoop: YARN Resource Configuration
+===================================
+
+Overview
+--------
+
+YARN supports an extensible resource model. By default YARN tracks CPU and
+memory for all nodes, applications, and queues, but the resource definition
+can be extended to include arbitrary "countable" resources. A countable
+resource is a resource that is consumed while a container is running, but is
+released afterwards. CPU and memory are both countable resources. Other
+examples include GPU resources and software licenses.
+
+In addition, YARN also supports the use of "resource profiles", which allow a
+user to specify multiple resource requests through a single profile, similar to
+Amazon Web Services Elastic Compute Cloud (EC2) instance types. For example,
+"large" might mean 8 virtual cores and 16GB RAM.
+
+Configuration
+-------------
+
+The following configuration properties are supported. See below for details.
+
+`yarn-site.xml`
+
+| Configuration Property | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.resource-profiles.enabled` | Indicates whether resource profiles support is enabled. Defaults to `false`. |
+
+`resource-types.xml`
+
+| Configuration Property | Value | Description |
+|:---- |:---- |:---- |
+| `yarn.resource-types` | Comma-separated list of additional resources. May not include `memory`, `memory-mb`, or `vcores` |
+| `yarn.resource-types.<resource>.units` | Default unit for the specified resource type |
+| `yarn.resource-types.<resource>.minimum` | The minimum request for the specified resource type |
+| `yarn.resource-types.<resource>.maximum` | The maximum request for the specified resource type |
+
+`node-resources.xml`
+
+| Configuration Property | Value | Description |
+|:---- |:---- |:---- |
+| `yarn.nodemanager.resource-type.<resource>` | The count of the specified resource available from the node manager |
+
+Please note that the `resource-types.xml` and `node-resources.xml` files
+also need to be placed in the same configuration directory as `yarn-site.xml`
+if they are used. Alternatively, the properties may be placed into the
+`yarn-site.xml` file instead.
+
+YARN Resource Model
+-------------------
+
+### Resource Manager
+The resource manager is the final arbiter of what resources in the cluster are
+tracked. The resource manager loads its resource definition from XML
+configuration files. For example, to define a new resource in addition to
+CPU and memory, the following property should be configured:
+
+```xml
+<configuration>
+  <property>
+    <name>yarn.resource-types</name>
+    <value>resource1,resource2</value>
+    <description>
+    The resources to be used for scheduling. Use resource-types.xml
+    to specify details about the individual resource types.
+    </description>
+  </property>
+</configuration>
+```
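To complement the property above, a `node-resources.xml` sketch advertising the same hypothetical resources from a node manager might look like the following (the resource names and counts are illustrative, not recommendations):

```xml
<configuration>
  <!-- How much of each extended resource this node offers. -->
  <property>
    <name>yarn.nodemanager.resource-type.resource1</name>
    <value>5</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource-type.resource2</name>
    <value>2</value>
  </property>
</configuration>
```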
+
+A valid resource name must begin with a letter and contain only letters, numbers,
+and any of: '.', '_', or '-'. A valid resource name may also be optionally
+preceded by a name space followed by a slash. A valid name space consists of
+period-separated groups of 

[19/50] [abbrv] hadoop git commit: YARN-7388. TestAMRestart should be scheduler agnostic.

2017-11-13 Thread kkaranasos
YARN-7388. TestAMRestart should be scheduler agnostic.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a1382a18
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a1382a18
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a1382a18

Branch: refs/heads/YARN-6592
Commit: a1382a18dff8a70aa25240d6fbba6e22832a7679
Parents: 6c32dda
Author: Haibo Chen 
Authored: Thu Nov 9 10:49:50 2017 -0800
Committer: Haibo Chen 
Committed: Thu Nov 9 10:49:50 2017 -0800

--
 .../scheduler/AbstractYarnScheduler.java|  8 +
 .../scheduler/SchedulerUtils.java   | 13 +++
 .../scheduler/capacity/CapacityScheduler.java   |  6 
 .../scheduler/fair/FairScheduler.java   | 10 ++
 .../scheduler/fifo/FifoScheduler.java   | 10 ++
 .../applicationsmanager/TestAMRestart.java  | 36 +++-
 6 files changed, 59 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1382a18/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 53f43e4..7308fd8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -1362,6 +1362,14 @@ public abstract class AbstractYarnScheduler
   }
 
   /**
+   * Kill a RMContainer. This is meant to be called in tests only to simulate
+   * AM container failures.
+   * @param container the container to kill
+   */
+  @VisibleForTesting
+  public abstract void killContainer(RMContainer container);
+
+  /**
   * Update internal state of the scheduler.  This can be useful for scheduler
   * implementations that maintain some state that needs to be periodically
   * updated; for example, metrics or queue resources.  It will be called by the

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1382a18/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
index c558b8d..32f5824 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
@@ -101,6 +101,19 @@ public class SchedulerUtils {
 ContainerExitStatus.ABORTED, diagnostics);
   }
 
+
+  /**
+   * Utility to create a {@link ContainerStatus} for killed containers.
+   * @param containerId {@link ContainerId} of the killed container.
+   * @param diagnostics diagnostic message
+   * @return ContainerStatus for a killed container
+   */
+  public static ContainerStatus createKilledContainerStatus(
+  ContainerId containerId, String diagnostics) {
+return createAbnormalContainerStatus(containerId,
+ContainerExitStatus.KILLED_BY_RESOURCEMANAGER, diagnostics);
+  }
+
   /**
* Utility to create a {@link ContainerStatus} during exceptional
* circumstances.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1382a18/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 

[16/50] [abbrv] hadoop git commit: YARN-7454. RMAppAttemptMetrics#getAggregateResourceUsage can NPE due to double lookup. Contributed by Jason Lowe.

2017-11-13 Thread kkaranasos
YARN-7454. RMAppAttemptMetrics#getAggregateResourceUsage can NPE due to double lookup. Contributed by Jason Lowe.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0a72c2f5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0a72c2f5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0a72c2f5

Branch: refs/heads/YARN-6592
Commit: 0a72c2f56c37063609de72eef1f74632890c048a
Parents: 3a3566e
Author: bibinchundatt 
Authored: Thu Nov 9 21:01:19 2017 +0530
Committer: bibinchundatt 
Committed: Thu Nov 9 21:01:19 2017 +0530

--
 .../rmapp/attempt/RMAppAttemptMetrics.java   | 15 ---
 1 file changed, 8 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0a72c2f5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
index 0982ef9..015cff7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
@@ -135,20 +135,21 @@ public class RMAppAttemptMetrics {
 // Only add in the running containers if this is the active attempt.
 RMApp rmApp = rmContext.getRMApps().get(attemptId.getApplicationId());
 if (rmApp != null) {
-      RMAppAttempt currentAttempt = rmContext.getRMApps().get(attemptId.getApplicationId()).getCurrentAppAttempt();
-  if (currentAttempt.getAppAttemptId().equals(attemptId)) {
+  RMAppAttempt currentAttempt = rmApp.getCurrentAppAttempt();
+  if (currentAttempt != null
+  && currentAttempt.getAppAttemptId().equals(attemptId)) {
 ApplicationResourceUsageReport appResUsageReport =
 rmContext.getScheduler().getAppResourceUsageReport(attemptId);
 if (appResUsageReport != null) {
          Map<String, Long> tmp = appResUsageReport.getResourceSecondsMap();
          for (Map.Entry<String, Long> entry : tmp.entrySet()) {
-if (resourcesUsed.containsKey(entry.getKey())) {
-  Long value = resourcesUsed.get(entry.getKey());
+Long value = resourcesUsed.get(entry.getKey());
+if (value != null) {
   value += entry.getValue();
-  resourcesUsed.put(entry.getKey(), value);
-} else{
-  resourcesUsed.put(entry.getKey(), entry.getValue());
+} else {
+  value = entry.getValue();
 }
+resourcesUsed.put(entry.getKey(), value);
   }
 }
   }
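The pattern behind this fix generalizes: a `containsKey()` followed by `get()` performs two lookups and can NPE if the entry disappears in between, whereas a single `get()` with a null check is both cheaper and safe. A self-contained sketch with illustrative names and values:

```java
import java.util.HashMap;
import java.util.Map;

// Null-safe aggregation with one map lookup per key, mirroring the
// resourcesUsed accumulation in the diff above.
public class SingleLookupAggregation {
  public static void main(String[] args) {
    Map<String, Long> resourcesUsed = new HashMap<>();
    resourcesUsed.put("memory-mb", 1024L);

    Map<String, Long> update = new HashMap<>();
    update.put("memory-mb", 512L); // existing key: add to it
    update.put("vcores", 2L);      // new key: insert as-is

    for (Map.Entry<String, Long> entry : update.entrySet()) {
      // One lookup instead of containsKey() followed by get().
      Long value = resourcesUsed.get(entry.getKey());
      value = (value != null) ? value + entry.getValue() : entry.getValue();
      resourcesUsed.put(entry.getKey(), value);
    }

    System.out.println(resourcesUsed.get("memory-mb"));
    System.out.println(resourcesUsed.get("vcores"));
  }
}
```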


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[03/50] [abbrv] hadoop git commit: YARN-7394. Merge code paths for Reservation/Plan queues and Auto Created queues. (Suma Shivaprasad via wangda)

2017-11-13 Thread kkaranasos
YARN-7394. Merge code paths for Reservation/Plan queues and Auto Created queues. (Suma Shivaprasad via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/13fa2d4e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/13fa2d4e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/13fa2d4e

Branch: refs/heads/YARN-6592
Commit: 13fa2d4e3e55a849dcd7e472750f3e0422cc2ac9
Parents: 8f214dc
Author: Wangda Tan 
Authored: Mon Nov 6 21:38:24 2017 -0800
Committer: Wangda Tan 
Committed: Mon Nov 6 21:38:24 2017 -0800

--
 .../CapacitySchedulerPlanFollower.java  |  10 +-
 .../capacity/AbstractManagedParentQueue.java| 232 +++
 .../capacity/AutoCreatedLeafQueue.java  | 129 +++
 .../scheduler/capacity/CapacityScheduler.java   |  55 +++--
 .../capacity/CapacitySchedulerQueueManager.java |   4 +-
 .../scheduler/capacity/LeafQueue.java   |   4 +
 .../scheduler/capacity/ParentQueue.java |  13 ++
 .../scheduler/capacity/PlanQueue.java   | 191 ++-
 .../scheduler/capacity/ReservationQueue.java| 122 --
 .../capacity/TestAutoCreatedLeafQueue.java  | 113 +
 .../TestCapacitySchedulerDynamicBehavior.java   |  32 +--
 .../capacity/TestReservationQueue.java  | 110 -
 12 files changed, 569 insertions(+), 446 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/13fa2d4e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacitySchedulerPlanFollower.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacitySchedulerPlanFollower.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacitySchedulerPlanFollower.java
index 551f075..2e16689 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacitySchedulerPlanFollower.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacitySchedulerPlanFollower.java
@@ -28,10 +28,10 @@ import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.Queue;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerDynamicEditException;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AutoCreatedLeafQueue;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.PlanQueue;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ReservationQueue;
 import org.apache.hadoop.yarn.util.Clock;
 import org.apache.hadoop.yarn.util.resource.Resources;
 import org.slf4j.Logger;
@@ -92,8 +92,8 @@ public class CapacitySchedulerPlanFollower extends AbstractSchedulerPlanFollower
   String planQueueName, Queue queue, String currResId) {
 PlanQueue planQueue = (PlanQueue)queue;
 try {
-  ReservationQueue resQueue =
-  new ReservationQueue(cs, currResId, planQueue);
+  AutoCreatedLeafQueue resQueue =
+  new AutoCreatedLeafQueue(cs, currResId, planQueue);
   cs.addQueue(resQueue);
 } catch (SchedulerDynamicEditException e) {
   LOG.warn(
@@ -112,8 +112,8 @@ public class CapacitySchedulerPlanFollower extends AbstractSchedulerPlanFollower
 PlanQueue planQueue = (PlanQueue)queue;
 if (cs.getQueue(defReservationId) == null) {
   try {
-ReservationQueue defQueue =
-new ReservationQueue(cs, defReservationId, planQueue);
+AutoCreatedLeafQueue defQueue =
+new AutoCreatedLeafQueue(cs, defReservationId, planQueue);
 cs.addQueue(defQueue);
   } catch (SchedulerDynamicEditException e) {
 LOG.warn(

http://git-wip-us.apache.org/repos/asf/hadoop/blob/13fa2d4e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractManagedParentQueue.java

[12/50] [abbrv] hadoop git commit: YARN-7166. Container REST endpoints should report resource types

2017-11-13 Thread kkaranasos
YARN-7166. Container REST endpoints should report resource types

Change-Id: If9c2fe58d4cf758bb6b6cf363dc01f35f8720987


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0de10680
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0de10680
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0de10680

Branch: refs/heads/YARN-6592
Commit: 0de10680b7e5a9dfc85173bcfd338fd3656aa57f
Parents: cb35a59
Author: Daniel Templeton 
Authored: Wed Nov 8 16:43:49 2017 -0800
Committer: Daniel Templeton 
Committed: Wed Nov 8 16:43:49 2017 -0800

--
 .../yarn/server/webapp/dao/ContainerInfo.java   | 39 +---
 1 file changed, 34 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0de10680/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/dao/ContainerInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/dao/ContainerInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/dao/ContainerInfo.java
index 1a5ee85..26a822c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/dao/ContainerInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/dao/ContainerInfo.java
@@ -18,6 +18,9 @@
 
 package org.apache.hadoop.yarn.server.webapp.dao;
 
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
 import javax.xml.bind.annotation.XmlAccessType;
 import javax.xml.bind.annotation.XmlAccessorType;
 import javax.xml.bind.annotation.XmlRootElement;
@@ -27,6 +30,8 @@ import org.apache.hadoop.classification.InterfaceStability.Evolving;
 
 import org.apache.hadoop.yarn.api.records.ContainerReport;
 import org.apache.hadoop.yarn.api.records.ContainerState;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.ResourceInformation;
 import org.apache.hadoop.yarn.util.Times;
 
 @Public
@@ -49,20 +54,18 @@ public class ContainerInfo {
   protected ContainerState containerState;
   protected String nodeHttpAddress;
   protected String nodeId;
+  protected Map<String, Long> allocatedResources;
 
   public ContainerInfo() {
 // JAXB needs this
   }
 
   public ContainerInfo(ContainerReport container) {
-containerId = container.getContainerId().toString();
-if (container.getAllocatedResource() != null) {
-  allocatedMB = container.getAllocatedResource().getMemorySize();
-  allocatedVCores = container.getAllocatedResource().getVirtualCores();
-}
 if (container.getAssignedNode() != null) {
   assignedNodeId = container.getAssignedNode().toString();
 }
+
+containerId = container.getContainerId().toString();
 priority = container.getPriority().getPriority();
 startedTime = container.getCreationTime();
 finishedTime = container.getFinishTime();
@@ -73,6 +76,22 @@ public class ContainerInfo {
 containerState = container.getContainerState();
 nodeHttpAddress = container.getNodeHttpAddress();
 nodeId = container.getAssignedNode().toString();
+
+Resource allocated = container.getAllocatedResource();
+
+if (allocated != null) {
+  allocatedMB = allocated.getMemorySize();
+  allocatedVCores = allocated.getVirtualCores();
+
+  // Now populate the allocated resources. This map will include memory
+  // and CPU, because it's where they belong. We still keep allocatedMB
+  // and allocatedVCores so that we don't break the API.
+  allocatedResources = new HashMap<>();
+
+  for (ResourceInformation info : allocated.getResources()) {
+allocatedResources.put(info.getName(), info.getValue());
+  }
+}
   }
 
   public String getContainerId() {
@@ -130,4 +149,14 @@ public class ContainerInfo {
   public String getNodeId() {
 return nodeId;
   }
+
+  /**
+   * Return a map of the allocated resources. The map key is the resource name,
+   * and the value is the resource value.
+   *
+   * @return the allocated resources map
+   */
+  public Map<String, Long> getAllocatedResources() {
+    return Collections.unmodifiableMap(allocatedResources);
+  }
 }
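The shape of this change is easy to see in isolation: flatten the allocated resources into a name-to-value map (so extended resource types like GPUs show up in the REST payload) while keeping the legacy scalar fields for API compatibility. A self-contained sketch, where ResourceEntry is a hypothetical stand-in for YARN's ResourceInformation:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class ContainerResourceMapSketch {
  // Hypothetical stand-in for org.apache.hadoop.yarn.api.records.ResourceInformation.
  static final class ResourceEntry {
    final String name;
    final long value;
    ResourceEntry(String name, long value) { this.name = name; this.value = value; }
  }

  public static void main(String[] args) {
    ResourceEntry[] allocated = {
        new ResourceEntry("memory-mb", 2048L),
        new ResourceEntry("vcores", 2L),
        new ResourceEntry("gpu", 1L) // extended resource types flow through too
    };

    long allocatedMB = 0, allocatedVCores = 0; // legacy fields, kept for compat
    Map<String, Long> allocatedResources = new HashMap<>();
    for (ResourceEntry info : allocated) {
      allocatedResources.put(info.name, info.value);
      if (info.name.equals("memory-mb")) { allocatedMB = info.value; }
      if (info.name.equals("vcores")) { allocatedVCores = info.value; }
    }

    // Expose a read-only view, as the getter in the diff does.
    Map<String, Long> view = Collections.unmodifiableMap(allocatedResources);
    System.out.println(allocatedMB + " " + allocatedVCores + " " + view.get("gpu"));
  }
}
```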





[06/50] [abbrv] hadoop git commit: HDFS-7060. Avoid taking locks when sending heartbeats from the DataNode. Contributed by Jiandan Yang.

2017-11-13 Thread kkaranasos
HDFS-7060. Avoid taking locks when sending heartbeats from the DataNode. Contributed by Jiandan Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bb8a6eea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bb8a6eea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bb8a6eea

Branch: refs/heads/YARN-6592
Commit: bb8a6eea52cb1e2c3d0b7f8b49a1bab9e4255acd
Parents: 51e882d
Author: Weiwei Yang 
Authored: Wed Nov 8 10:22:13 2017 +0800
Committer: Weiwei Yang 
Committed: Wed Nov 8 10:22:13 2017 +0800

--
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 51 
 .../datanode/fsdataset/impl/FsVolumeImpl.java   | 40 +++
 2 files changed, 37 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb8a6eea/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 41c41e6..d4375cd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -153,22 +153,22 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
   public StorageReport[] getStorageReports(String bpid)
   throws IOException {
    List<StorageReport> reports;
-synchronized (statsLock) {
-      List<FsVolumeImpl> curVolumes = volumes.getVolumes();
-      reports = new ArrayList<>(curVolumes.size());
-  for (FsVolumeImpl volume : curVolumes) {
-try (FsVolumeReference ref = volume.obtainReference()) {
-  StorageReport sr = new StorageReport(volume.toDatanodeStorage(),
-  false,
-  volume.getCapacity(),
-  volume.getDfsUsed(),
-  volume.getAvailable(),
-  volume.getBlockPoolUsed(bpid),
-  volume.getNonDfsUsed());
-  reports.add(sr);
-} catch (ClosedChannelException e) {
-  continue;
-}
+// Volumes are the references from a copy-on-write snapshot, so the
+// access on the volume metrics doesn't require an additional lock.
+    List<FsVolumeImpl> curVolumes = volumes.getVolumes();
+    reports = new ArrayList<>(curVolumes.size());
+for (FsVolumeImpl volume : curVolumes) {
+  try (FsVolumeReference ref = volume.obtainReference()) {
+StorageReport sr = new StorageReport(volume.toDatanodeStorage(),
+false,
+volume.getCapacity(),
+volume.getDfsUsed(),
+volume.getAvailable(),
+volume.getBlockPoolUsed(bpid),
+volume.getNonDfsUsed());
+reports.add(sr);
+  } catch (ClosedChannelException e) {
+continue;
   }
 }
 
@@ -247,9 +247,6 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 
   private final int smallBufferSize;
 
-  // Used for synchronizing access to usage stats
-  private final Object statsLock = new Object();
-
   final LocalFileSystem localFS;
 
   private boolean blockPinningEnabled;
@@ -583,9 +580,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
*/
   @Override // FSDatasetMBean
   public long getDfsUsed() throws IOException {
-synchronized(statsLock) {
-  return volumes.getDfsUsed();
-}
+return volumes.getDfsUsed();
   }
 
   /**
@@ -593,9 +588,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
*/
   @Override // FSDatasetMBean
   public long getBlockPoolUsed(String bpid) throws IOException {
-synchronized(statsLock) {
-  return volumes.getBlockPoolUsed(bpid);
-}
+return volumes.getBlockPoolUsed(bpid);
   }
   
   /**
@@ -611,9 +604,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
*/
   @Override // FSDatasetMBean
   public long getCapacity() throws IOException {
-synchronized(statsLock) {
-  return volumes.getCapacity();
-}
+return volumes.getCapacity();
   }
 
   /**
@@ -621,9 +612,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
*/
   @Override // FSDatasetMBean
   public long getRemaining() throws IOException {
-synchronized(statsLock) {
-  return volumes.getRemaining();
-}
+return volumes.getRemaining();
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb8a6eea/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
--

[20/50] [abbrv] hadoop git commit: YARN-7143. FileNotFound handling in ResourceUtils is inconsistent

2017-11-13 Thread kkaranasos
YARN-7143. FileNotFound handling in ResourceUtils is inconsistent

Change-Id: Ib1bb487e14a15edd2b5a42cf5078c5a2b295f069


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/462f6c49
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/462f6c49
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/462f6c49

Branch: refs/heads/YARN-6592
Commit: 462f6c490efd2a38a9ba639bcda47b3aa667f650
Parents: a1382a1
Author: Daniel Templeton 
Authored: Thu Nov 9 10:36:49 2017 -0800
Committer: Daniel Templeton 
Committed: Thu Nov 9 11:58:49 2017 -0800

--
 .../yarn/util/resource/ResourceUtils.java   | 56 
 1 file changed, 23 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/462f6c49/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
index c9cc27b..9c9c0ef8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
@@ -342,17 +342,14 @@ public class ResourceUtils {
 if (!initializedResources) {
   synchronized (ResourceUtils.class) {
 if (!initializedResources) {
-  if (conf == null) {
-conf = new YarnConfiguration();
-  }
-  try {
-addResourcesFileToConf(resourceFile, conf);
-  } catch (FileNotFoundException fe) {
-if (LOG.isDebugEnabled()) {
-  LOG.debug("Unable to find '" + resourceFile + "'.");
-}
+  Configuration resConf = conf;
+
+  if (resConf == null) {
+resConf = new YarnConfiguration();
   }
-  initializeResourcesMap(conf);
+
+  addResourcesFileToConf(resourceFile, resConf);
+  initializeResourcesMap(resConf);
 }
   }
 }
@@ -389,7 +386,7 @@ public class ResourceUtils {
   }
 
   private static void addResourcesFileToConf(String resourceFile,
-  Configuration conf) throws FileNotFoundException {
+  Configuration conf) {
 try {
   InputStream ris = getConfInputStream(resourceFile, conf);
   if (LOG.isDebugEnabled()) {
@@ -397,15 +394,11 @@ public class ResourceUtils {
   }
   conf.addResource(ris);
 } catch (FileNotFoundException fe) {
-  throw fe;
-} catch (IOException ie) {
+  LOG.info("Unable to find '" + resourceFile + "'.");
+} catch (IOException | YarnException ex) {
   LOG.fatal("Exception trying to read resource types configuration '"
-  + resourceFile + "'.", ie);
-  throw new YarnRuntimeException(ie);
-} catch (YarnException ye) {
-  LOG.fatal("YARN Exception trying to read resource types configuration '"
-  + resourceFile + "'.", ye);
-  throw new YarnRuntimeException(ye);
+  + resourceFile + "'.", ex);
+  throw new YarnRuntimeException(ex);
 }
   }
 
@@ -467,22 +460,19 @@ public class ResourceUtils {
  private static Map<String, ResourceInformation> initializeNodeResourceInformation(
      Configuration conf) {
    Map<String, ResourceInformation> nodeResources = new HashMap<>();
-try {
-  addResourcesFileToConf(
-  YarnConfiguration.NODE_RESOURCES_CONFIGURATION_FILE, conf);
-  for (Map.Entry<String, String> entry : conf) {
-String key = entry.getKey();
-String value = entry.getValue();
-if (key.startsWith(YarnConfiguration.NM_RESOURCES_PREFIX)) {
-  addResourceInformation(key, value, nodeResources);
-}
-  }
-} catch (FileNotFoundException fe) {
-  if (LOG.isDebugEnabled()) {
-LOG.debug("Couldn't find node resources file: "
-+ YarnConfiguration.NODE_RESOURCES_CONFIGURATION_FILE);
+
+addResourcesFileToConf(YarnConfiguration.NODE_RESOURCES_CONFIGURATION_FILE,
+conf);
+
+for (Map.Entry<String, String> entry : conf) {
+  String key = entry.getKey();
+  String value = entry.getValue();
+
+  if (key.startsWith(YarnConfiguration.NM_RESOURCES_PREFIX)) {
+addResourceInformation(key, value, nodeResources);
   }
 }
+
 return nodeResources;
   }
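The refactor above keeps the double-checked locking but does all work on a local variable (`resConf`), so a caller-supplied null never causes a half-built configuration to be published. A minimal, hypothetical sketch of that shape (illustrative names, not YARN APIs):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: double-checked locking over a volatile flag, with
// everything assembled in a local variable and published only once fully
// built; the flag is flipped last so other threads never see partial state.
public final class LazyResourceMap {
    private static volatile boolean initialized = false;
    private static Map<String, Long> resources;

    public static Map<String, Long> get(Map<String, Long> conf) {
        if (!initialized) {
            synchronized (LazyResourceMap.class) {
                if (!initialized) {
                    Map<String, Long> local = (conf == null)
                        ? new HashMap<>()   // default, like new YarnConfiguration()
                        : conf;
                    local.putIfAbsent("memory-mb", 8192L);
                    resources = local;      // publish the finished map
                    initialized = true;     // flip the flag last
                }
            }
        }
        return resources;
    }
}
```

After the first call, every subsequent call returns the same cached map regardless of the argument, which is why mutating the caller's reference (the bug the hunk fixes) would be surprising.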
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org

hadoop git commit: YARN-6078. Containers stuck in Localizing state. Contributed by Billie Rinaldi.

2017-11-13 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 7b3cd1013 -> ef212b855


YARN-6078. Containers stuck in Localizing state. Contributed by Billie Rinaldi.

(cherry picked from commit e14f03dfbf078de63126a1e882261081b9ec6778)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ef212b85
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ef212b85
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ef212b85

Branch: refs/heads/branch-3.0
Commit: ef212b8550bd607e9e166a99f76aae4558729464
Parents: 7b3cd10
Author: Junping Du 
Authored: Mon Nov 13 15:27:37 2017 -0800
Committer: Junping Du 
Committed: Mon Nov 13 15:28:21 2017 -0800

--
 .../localizer/ResourceLocalizationService.java  |  30 
 .../TestResourceLocalizationService.java| 144 +++
 2 files changed, 174 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef212b85/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
index 29fc747..17aa7d9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
@@ -74,6 +74,7 @@ import org.apache.hadoop.service.CompositeService;
 import org.apache.hadoop.util.DiskChecker;
 import org.apache.hadoop.util.DiskValidator;
 import org.apache.hadoop.util.DiskValidatorFactory;
+import org.apache.hadoop.util.Shell;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.concurrent.HadoopExecutors;
 import org.apache.hadoop.util.concurrent.HadoopScheduledThreadPoolExecutor;
@@ -808,6 +809,7 @@ public class ResourceLocalizationService extends CompositeService
   return; // ignore; already gone
 }
 privLocalizers.remove(locId);
+LOG.info("Interrupting localizer for " + locId);
 localizer.interrupt();
   }
 }
@@ -1189,6 +1191,34 @@ public class ResourceLocalizationService extends CompositeService
 }
 
 @Override
+public void interrupt() {
+  boolean destroyedShell = false;
+  try {
+for (Shell shell : Shell.getAllShells()) {
+  try {
+if (shell.getWaitingThread() != null &&
+shell.getWaitingThread().equals(this) &&
+shell.getProcess() != null &&
+shell.getProcess().isAlive()) {
+  LOG.info("Destroying localization shell process for " +
+  localizerId);
+  shell.getProcess().destroy();
+  destroyedShell = true;
+  break;
+}
+  } catch (Exception e) {
+LOG.warn("Failed to destroy localization shell process for " +
+localizerId, e);
+  }
+}
+  } finally {
+if (!destroyedShell) {
+  super.interrupt();
+}
+  }
+}
+
+@Override
 @SuppressWarnings("unchecked") // dispatcher not typed
 public void run() {
   Path nmPrivateCTokensPath = null;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef212b85/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
index d863c6a..c180545 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
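The `interrupt()` override above first tries to destroy the shell process the localizer thread is blocked on (which unblocks the thread), and only falls back to a plain thread interrupt when no process was destroyed. A hedged, self-contained sketch of that shape (illustrative names, not the NodeManager classes):

```java
// Illustrative sketch: a worker thread that may block on a child process.
// interrupt() prefers destroying the process (which unblocks waitFor())
// and only falls back to Thread.interrupt() when nothing was destroyed.
class ProcessAwareThread extends Thread {
    private volatile Process child;   // set while a subprocess is running

    void setChild(Process p) {
        child = p;
    }

    @Override
    public void interrupt() {
        boolean destroyedShell = false;
        try {
            Process p = child;
            if (p != null && p.isAlive()) {
                p.destroy();          // unblocks a thread stuck in p.waitFor()
                destroyedShell = true;
            }
        } finally {
            if (!destroyedShell) {
                super.interrupt();    // normal interruption path
            }
        }
    }
}
```

Destroying the process first matters because a thread blocked in `Process.waitFor()` on some platforms may not respond promptly to a bare interrupt, which is how containers got stuck in the Localizing state.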

hadoop git commit: YARN-6078. Containers stuck in Localizing state. Contributed by Billie Rinaldi.

2017-11-13 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/trunk f871b7541 -> e14f03dfb


YARN-6078. Containers stuck in Localizing state. Contributed by Billie Rinaldi.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e14f03df
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e14f03df
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e14f03df

Branch: refs/heads/trunk
Commit: e14f03dfbf078de63126a1e882261081b9ec6778
Parents: f871b75
Author: Junping Du 
Authored: Mon Nov 13 15:27:37 2017 -0800
Committer: Junping Du 
Committed: Mon Nov 13 15:27:37 2017 -0800

--
 .../localizer/ResourceLocalizationService.java  |  30 
 .../TestResourceLocalizationService.java| 144 +++
 2 files changed, 174 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e14f03df/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
index 29fc747..17aa7d9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
@@ -74,6 +74,7 @@ import org.apache.hadoop.service.CompositeService;
 import org.apache.hadoop.util.DiskChecker;
 import org.apache.hadoop.util.DiskValidator;
 import org.apache.hadoop.util.DiskValidatorFactory;
+import org.apache.hadoop.util.Shell;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.concurrent.HadoopExecutors;
 import org.apache.hadoop.util.concurrent.HadoopScheduledThreadPoolExecutor;
@@ -808,6 +809,7 @@ public class ResourceLocalizationService extends CompositeService
   return; // ignore; already gone
 }
 privLocalizers.remove(locId);
+LOG.info("Interrupting localizer for " + locId);
 localizer.interrupt();
   }
 }
@@ -1189,6 +1191,34 @@ public class ResourceLocalizationService extends CompositeService
 }
 
 @Override
+public void interrupt() {
+  boolean destroyedShell = false;
+  try {
+for (Shell shell : Shell.getAllShells()) {
+  try {
+if (shell.getWaitingThread() != null &&
+shell.getWaitingThread().equals(this) &&
+shell.getProcess() != null &&
+shell.getProcess().isAlive()) {
+  LOG.info("Destroying localization shell process for " +
+  localizerId);
+  shell.getProcess().destroy();
+  destroyedShell = true;
+  break;
+}
+  } catch (Exception e) {
+LOG.warn("Failed to destroy localization shell process for " +
+localizerId, e);
+  }
+}
+  } finally {
+if (!destroyedShell) {
+  super.interrupt();
+}
+  }
+}
+
+@Override
 @SuppressWarnings("unchecked") // dispatcher not typed
 public void run() {
   Path nmPrivateCTokensPath = null;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e14f03df/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
index d863c6a..c180545 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java

hadoop git commit: YARN-6909. Use LightWeightedResource when number of resource types more than two. (Sunil G via wangda)

2017-11-13 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 cb48eea92 -> 7b3cd1013


YARN-6909. Use LightWeightedResource when number of resource types more than 
two. (Sunil G via wangda)

Change-Id: I90e021c5dea7abd9ec6bd73b2287c8adebe14595


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7b3cd101
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7b3cd101
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7b3cd101

Branch: refs/heads/branch-3.0
Commit: 7b3cd10131ede9172c7682e2116aa76f05590a99
Parents: cb48eea
Author: Wangda Tan 
Authored: Thu Nov 9 14:51:15 2017 -0800
Committer: Andrew Wang 
Committed: Mon Nov 13 14:45:40 2017 -0800

--
 .../hadoop/yarn/api/records/Resource.java   | 48 ++
 .../api/records/impl/LightWeightResource.java   | 94 +---
 .../api/records/impl/pb/ResourcePBImpl.java | 88 +-
 .../scheduler/ClusterNodeTracker.java   |  2 +-
 4 files changed, 141 insertions(+), 91 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b3cd101/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
index 14131cb..e863d68 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
@@ -28,9 +28,9 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
 import org.apache.hadoop.classification.InterfaceStability.Stable;
 import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
+import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes;
 import org.apache.hadoop.yarn.api.records.impl.LightWeightResource;
 import org.apache.hadoop.yarn.exceptions.ResourceNotFoundException;
-import org.apache.hadoop.yarn.util.Records;
 import org.apache.hadoop.yarn.util.resource.ResourceUtils;
 
 /**
@@ -75,34 +75,27 @@ public abstract class Resource implements Comparable<Resource> {
   @Public
   @Stable
   public static Resource newInstance(int memory, int vCores) {
-if (ResourceUtils.getNumberOfKnownResourceTypes() > 2) {
-  Resource ret = Records.newRecord(Resource.class);
-  ret.setMemorySize(memory);
-  ret.setVirtualCores(vCores);
-  return ret;
-}
 return new LightWeightResource(memory, vCores);
   }
 
   @Public
   @Stable
   public static Resource newInstance(long memory, int vCores) {
-if (ResourceUtils.getNumberOfKnownResourceTypes() > 2) {
-  Resource ret = Records.newRecord(Resource.class);
-  ret.setMemorySize(memory);
-  ret.setVirtualCores(vCores);
-  return ret;
-}
 return new LightWeightResource(memory, vCores);
   }
 
   @InterfaceAudience.Private
   @InterfaceStability.Unstable
   public static Resource newInstance(Resource resource) {
-Resource ret = Resource.newInstance(resource.getMemorySize(),
-resource.getVirtualCores());
-if (ResourceUtils.getNumberOfKnownResourceTypes() > 2) {
-  Resource.copy(resource, ret);
+Resource ret;
+int numberOfKnownResourceTypes = ResourceUtils
+.getNumberOfKnownResourceTypes();
+if (numberOfKnownResourceTypes > 2) {
+  ret = new LightWeightResource(resource.getMemorySize(),
+  resource.getVirtualCores(), resource.getResources());
+} else {
+  ret = new LightWeightResource(resource.getMemorySize(),
+  resource.getVirtualCores());
 }
 return ret;
   }
@@ -411,7 +404,7 @@ public abstract class Resource implements Comparable<Resource> {
 int arrLenOther = otherResources.length;
 
 // compare memory and vcores first(in that order) to preserve
-// existing behaviour
+// existing behavior.
 for (int i = 0; i < arrLenThis; i++) {
   ResourceInformation otherEntry;
   try {
@@ -483,4 +476,23 @@ public abstract class Resource implements Comparable<Resource> {
 }
 return Long.valueOf(value).intValue();
   }
+
+  /**
+   * Create ResourceInformation with basic fields.
+   * @param name Resource Type Name
+   * @param unit Default unit of provided resource type
+   * @param value Value associated with given resource
+   * @return ResourceInformation object
+   */
+  protected static ResourceInformation newDefaultInformation(String name,
+  String unit, long value) {
+ResourceInformation 
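The `compareTo` hunk above notes that memory and vcores are compared first, in that order, to preserve existing scheduler behavior. A minimal illustrative sketch of that ordering (hypothetical class, not the YARN `Resource` API):

```java
// Illustrative sketch: compare memory first, then vcores, mirroring the
// ordering the diff above preserves for backward compatibility.
class SimpleResource implements Comparable<SimpleResource> {
    final long memory;
    final int vcores;

    SimpleResource(long memory, int vcores) {
        this.memory = memory;
        this.vcores = vcores;
    }

    @Override
    public int compareTo(SimpleResource other) {
        int diff = Long.compare(memory, other.memory); // memory dominates
        return (diff != 0) ? diff : Integer.compare(vcores, other.vcores);
    }
}
```

Any additional resource types would only be consulted after these two fields tie, so existing two-resource orderings are unchanged.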

hadoop git commit: HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)

2017-11-13 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9.0 9a89a3e0b -> 756ebc839


HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)

(cherry picked from commit f871b7541a5375eb117eafb9a091e4f59401231f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/756ebc83
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/756ebc83
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/756ebc83

Branch: refs/heads/branch-2.9.0
Commit: 756ebc8394e473ac25feac05fa493f6d612e6c50
Parents: 9a89a3e
Author: Arun Suresh 
Authored: Mon Nov 13 14:37:36 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 14:39:36 2017 -0800

--
 LICENSE.txt | 25 +
 1 file changed, 25 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/756ebc83/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 25c3b30..23c63be 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -665,6 +665,31 @@ hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery
 Apache HBase - Server which contains JQuery minified javascript library version 1.8.3
 Microsoft JDBC Driver for SQLServer - version 6.2.1.jre7
+
+
+MIT License
+
+Copyright (c) 2003-2017 Optimatika
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+For:
 oj! Algorithms - version 43.0
 

 





hadoop git commit: HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)

2017-11-13 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 f671c22e3 -> f894eefec


HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)

(cherry picked from commit f871b7541a5375eb117eafb9a091e4f59401231f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f894eefe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f894eefe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f894eefe

Branch: refs/heads/branch-2
Commit: f894eefec1c4c3b13ccd5becb4b60d9c4926
Parents: f671c22
Author: Arun Suresh 
Authored: Mon Nov 13 14:37:36 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 14:39:07 2017 -0800

--
 LICENSE.txt | 25 +
 1 file changed, 25 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f894eefe/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 25c3b30..23c63be 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -665,6 +665,31 @@ hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery
 Apache HBase - Server which contains JQuery minified javascript library version 1.8.3
 Microsoft JDBC Driver for SQLServer - version 6.2.1.jre7
+
+
+MIT License
+
+Copyright (c) 2003-2017 Optimatika
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+For:
 oj! Algorithms - version 43.0
 

 





hadoop git commit: HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)

2017-11-13 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 ed0658a57 -> 12bd1aaac


HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)

(cherry picked from commit f871b7541a5375eb117eafb9a091e4f59401231f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/12bd1aaa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/12bd1aaa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/12bd1aaa

Branch: refs/heads/branch-2.9
Commit: 12bd1aaac5b8862ea7c8ff4e47e61f34f379a327
Parents: ed0658a
Author: Arun Suresh 
Authored: Mon Nov 13 14:37:36 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 14:38:32 2017 -0800

--
 LICENSE.txt | 25 +
 1 file changed, 25 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/12bd1aaa/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 25c3b30..23c63be 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -665,6 +665,31 @@ hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery
 Apache HBase - Server which contains JQuery minified javascript library version 1.8.3
 Microsoft JDBC Driver for SQLServer - version 6.2.1.jre7
+
+
+MIT License
+
+Copyright (c) 2003-2017 Optimatika
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+For:
 oj! Algorithms - version 43.0
 

 





hadoop git commit: HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)

2017-11-13 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 f6de2b741 -> cb48eea92


HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)

(cherry picked from commit f871b7541a5375eb117eafb9a091e4f59401231f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cb48eea9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cb48eea9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cb48eea9

Branch: refs/heads/branch-3.0
Commit: cb48eea922556500577dcf85ffeeea9edc33e895
Parents: f6de2b7
Author: Arun Suresh 
Authored: Mon Nov 13 14:37:36 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 14:38:05 2017 -0800

--
 LICENSE.txt | 25 +
 1 file changed, 25 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb48eea9/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 67472b6..54ad821 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -699,6 +699,31 @@ hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery
 Apache HBase - Server which contains JQuery minified javascript library version 1.8.3
 Microsoft JDBC Driver for SQLServer - version 6.2.1.jre7
+
+
+MIT License
+
+Copyright (c) 2003-2017 Optimatika
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+For:
 oj! Algorithms - version 43.0
 

 





hadoop git commit: HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)

2017-11-13 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/trunk b07e68b02 -> f871b7541


HADOOP-15036. Update LICENSE.txt for HADOOP-14840. (asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f871b754
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f871b754
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f871b754

Branch: refs/heads/trunk
Commit: f871b7541a5375eb117eafb9a091e4f59401231f
Parents: b07e68b
Author: Arun Suresh 
Authored: Mon Nov 13 14:37:36 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 14:37:36 2017 -0800

--
 LICENSE.txt | 25 +
 1 file changed, 25 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f871b754/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index b0cef03..447c609 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -699,6 +699,31 @@ hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery
 Apache HBase - Server which contains JQuery minified javascript library version 1.8.3
 Microsoft JDBC Driver for SQLServer - version 6.2.1.jre7
+
+
+MIT License
+
+Copyright (c) 2003-2017 Optimatika
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+For:
 oj! Algorithms - version 43.0
 

 





hadoop git commit: Addendum patch for Configuration fix. (Jason Lowe via asuresh)

2017-11-13 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 c2f479c24 -> f6de2b741


Addendum patch for Configuration fix. (Jason Lowe via asuresh)

(cherry picked from commit b07e68b02a34d272114dda4194992a847928aef8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f6de2b74
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f6de2b74
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f6de2b74

Branch: refs/heads/branch-3.0
Commit: f6de2b741ed75aecdd5244ce4a360b9780b53251
Parents: c2f479c
Author: Arun Suresh 
Authored: Mon Nov 13 14:03:50 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 14:05:41 2017 -0800

--
 .../src/main/java/org/apache/hadoop/conf/Configuration.java| 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f6de2b74/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index 6c061a4..18e6e6e 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -2936,7 +2936,8 @@ public class Configuration implements Iterable<Map.Entry<String, String>>,
 // xi:include are treated as inline and retain current source
 URL include = getResource(confInclude);
 if (include != null) {
-  Resource classpathResource = new Resource(include, name);
+  Resource classpathResource = new Resource(include, name,
+  wrapper.isParserRestricted());
   loadResource(properties, classpathResource, quiet);
 } else {
   URL url;
@@ -2957,7 +2958,8 @@ public class Configuration implements Iterable<Map.Entry<String, String>>,
 }
 url = href.toURI().toURL();
   }
-  Resource uriResource = new Resource(url, name);
+  Resource uriResource = new Resource(url, name,
+  wrapper.isParserRestricted());
   loadResource(properties, uriResource, quiet);
 }
 break;





hadoop git commit: Addendum patch for Configuration fix. (Jason Lowe via asuresh)

2017-11-13 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4908a8970 -> b07e68b02


Addendum patch for Configuration fix. (Jason Lowe via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b07e68b0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b07e68b0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b07e68b0

Branch: refs/heads/trunk
Commit: b07e68b02a34d272114dda4194992a847928aef8
Parents: 4908a89
Author: Arun Suresh 
Authored: Mon Nov 13 14:03:50 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 14:03:50 2017 -0800

--
 .../src/main/java/org/apache/hadoop/conf/Configuration.java| 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b07e68b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index dfbeec7..fce2194 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -2962,7 +2962,8 @@ public class Configuration implements Iterable<Map.Entry<String, String>>,
 // xi:include are treated as inline and retain current source
 URL include = getResource(confInclude);
 if (include != null) {
-  Resource classpathResource = new Resource(include, name);
+  Resource classpathResource = new Resource(include, name,
+  wrapper.isParserRestricted());
   loadResource(properties, classpathResource, quiet);
 } else {
   URL url;
@@ -2983,7 +2984,8 @@ public class Configuration implements Iterable<Map.Entry<String, String>>,
 }
 url = href.toURI().toURL();
   }
-  Resource uriResource = new Resource(url, name);
+  Resource uriResource = new Resource(url, name,
+  wrapper.isParserRestricted());
   loadResource(properties, uriResource, quiet);
 }
 break;
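The recurring two-line change above threads `wrapper.isParserRestricted()` into every `Resource` built for an `xi:include`, so nested includes inherit the restricted-parser setting instead of silently dropping it. A minimal sketch of the idea, using simplified stand-in types (these are illustrative, not the real Hadoop `Configuration.Resource` classes):

```java
// Hypothetical, simplified stand-ins for Configuration.Resource and the
// parse wrapper -- not the actual Hadoop classes.
public class IncludeDemo {
    static class Resource {
        final String url;
        final String name;
        final boolean parserRestricted; // must survive into nested includes

        Resource(String url, String name, boolean parserRestricted) {
            this.url = url;
            this.name = name;
            this.parserRestricted = parserRestricted;
        }
    }

    // The fix forwards the wrapper's restricted-parser flag when building the
    // Resource for an xi:include; before the fix the flag was dropped.
    static Resource resourceForInclude(Resource wrapper, String confInclude) {
        return new Resource(confInclude, wrapper.name, wrapper.parserRestricted);
    }

    public static void main(String[] args) {
        Resource wrapper = new Resource("core-site.xml", "core-site.xml", true);
        Resource included = resourceForInclude(wrapper, "included.xml");
        System.out.println(included.parserRestricted); // true
    }
}
```
The point of the design is that restriction is a property of the parse in progress, so every resource created during that parse must carry it forward.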


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[1/2] hadoop git commit: HDFS-12705. WebHdfsFileSystem exceptions should retain the caused by exception. Contributed by Hanisha Koneru.

2017-11-13 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 4e847d63a -> f671c22e3
  refs/heads/trunk 040a38dc4 -> 4908a8970


HDFS-12705. WebHdfsFileSystem exceptions should retain the caused by exception. 
Contributed by Hanisha Koneru.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4908a897
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4908a897
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4908a897

Branch: refs/heads/trunk
Commit: 4908a8970eaf500642a9d8427e322032c1ec047a
Parents: 040a38d
Author: Arpit Agarwal 
Authored: Mon Nov 13 11:30:39 2017 -0800
Committer: Arpit Agarwal 
Committed: Mon Nov 13 11:30:39 2017 -0800

--
 .../hadoop/hdfs/web/WebHdfsFileSystem.java  |  1 +
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java | 59 
 2 files changed, 60 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4908a897/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index 34f5d6e..c1aef49 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -780,6 +780,7 @@ public class WebHdfsFileSystem extends FileSystem
   try {
 IOException newIoe = ioe.getClass().getConstructor(String.class)
 .newInstance(node + ": " + ioe.getMessage());
+newIoe.initCause(ioe.getCause());
 newIoe.setStackTrace(ioe.getStackTrace());
 ioe = newIoe;
   } catch (NoSuchMethodException | SecurityException 
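The one-line fix above adds `initCause` so the rewrapped exception keeps its original cause as well as its stack trace. A self-contained sketch of the same rewrap pattern (class and method names here are illustrative, not the Hadoop internals):

```java
import java.io.IOException;

public class RewrapDemo {
    // Rebuild an IOException of the same runtime class with a node-prefixed
    // message, preserving the original cause and stack trace, as the patch does.
    static IOException rewrap(String node, IOException ioe) {
        try {
            IOException newIoe = ioe.getClass().getConstructor(String.class)
                .newInstance(node + ": " + ioe.getMessage());
            newIoe.initCause(ioe.getCause());     // the HDFS-12705 one-liner
            newIoe.setStackTrace(ioe.getStackTrace());
            return newIoe;
        } catch (ReflectiveOperationException | SecurityException e) {
            return ioe; // fall back to the original exception unchanged
        }
    }

    public static void main(String[] args) {
        IOException original =
            new IOException("boom", new IllegalStateException("root"));
        IOException wrapped = rewrap("dn1:50075", original);
        System.out.println(wrapped.getMessage());            // dn1:50075: boom
        System.out.println(wrapped.getCause().getMessage()); // root
    }
}
```
Note that `initCause` only works here because the clone is built with the message-only constructor, which leaves the cause uninitialized.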

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4908a897/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
index 3ee8ad0..500ec0a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
@@ -1452,4 +1452,63 @@ public class TestWebHDFS {
   }
 }
   }
+
+  /**
+   * Tests that {@link WebHdfsFileSystem.AbstractRunner} propagates original
+   * exception's stacktrace and cause during runWithRetry attempts.
+   * @throws Exception
+   */
+  @Test
+  public void testExceptionPropogationInAbstractRunner() throws Exception{
+final Configuration conf = WebHdfsTestUtil.createConf();
+final Path dir = new Path("/testExceptionPropogationInAbstractRunner");
+
+conf.setBoolean(HdfsClientConfigKeys.Retry.POLICY_ENABLED_KEY, true);
+
+final short numDatanodes = 1;
+final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+.numDataNodes(numDatanodes)
+.build();
+try {
+  cluster.waitActive();
+  final FileSystem fs = WebHdfsTestUtil
+  .getWebHdfsFileSystem(conf, WebHdfsConstants.WEBHDFS_SCHEME);
+
+  //create a file
+  final long length = 1L << 20;
+  final Path file1 = new Path(dir, "testFile");
+
+  DFSTestUtil.createFile(fs, file1, length, numDatanodes, 20120406L);
+
+  //get file status and check that it was written properly.
+  final FileStatus s1 = fs.getFileStatus(file1);
+  assertEquals("Write failed for file " + file1, length, s1.getLen());
+
+  FSDataInputStream in = fs.open(file1);
+  in.read(); // Connection is made only when the first read() occurs.
+  final WebHdfsInputStream webIn =
+  (WebHdfsInputStream)(in.getWrappedStream());
+
+  final String msg = "Throwing dummy exception";
+  IOException ioe = new IOException(msg, new DummyThrowable());
+
+  WebHdfsFileSystem.ReadRunner readRunner = spy(webIn.getReadRunner());
+  doThrow(ioe).when(readRunner).getResponse(any(HttpURLConnection.class));
+
+  webIn.setReadRunner(readRunner);
+
+  try {
+webIn.read();
+fail("Read should have thrown IOException.");
+  } catch (IOException e) {
+assertTrue(e.getMessage().contains(msg));
+assertTrue(e.getCause() instanceof DummyThrowable);
+  }
+} finally {
+  cluster.shutdown();
+}
+  }
+
+  final static class DummyThrowable 

[2/2] hadoop git commit: HDFS-12705. WebHdfsFileSystem exceptions should retain the caused by exception. Contributed by Hanisha Koneru.

2017-11-13 Thread arp
HDFS-12705. WebHdfsFileSystem exceptions should retain the caused by exception. 
Contributed by Hanisha Koneru.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f671c22e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f671c22e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f671c22e

Branch: refs/heads/branch-2
Commit: f671c22e3e4558fdec7d79844e250891b3a765d1
Parents: 4e847d6
Author: Arpit Agarwal 
Authored: Mon Nov 13 11:30:39 2017 -0800
Committer: Arpit Agarwal 
Committed: Mon Nov 13 13:56:13 2017 -0800

--
 .../hadoop/hdfs/web/WebHdfsFileSystem.java  |  1 +
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java | 59 
 2 files changed, 60 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f671c22e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index 3f9839a..587e1fc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -788,6 +788,7 @@ public class WebHdfsFileSystem extends FileSystem
   try {
 IOException newIoe = ioe.getClass().getConstructor(String.class)
 .newInstance(node + ": " + ioe.getMessage());
+newIoe.initCause(ioe.getCause());
 newIoe.setStackTrace(ioe.getStackTrace());
 ioe = newIoe;
   } catch (NoSuchMethodException | SecurityException 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f671c22e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
index 588fd0b..47a5584 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
@@ -1652,4 +1652,63 @@ public class TestWebHDFS {
   }
 }
   }
+
+  /**
+   * Tests that {@link WebHdfsFileSystem.AbstractRunner} propagates original
+   * exception's stacktrace and cause during runWithRetry attempts.
+   * @throws Exception
+   */
+  @Test
+  public void testExceptionPropogationInAbstractRunner() throws Exception{
+final Configuration conf = WebHdfsTestUtil.createConf();
+final Path dir = new Path("/testExceptionPropogationInAbstractRunner");
+
+conf.setBoolean(HdfsClientConfigKeys.Retry.POLICY_ENABLED_KEY, true);
+
+final short numDatanodes = 1;
+final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+.numDataNodes(numDatanodes)
+.build();
+try {
+  cluster.waitActive();
+  final FileSystem fs = WebHdfsTestUtil
+  .getWebHdfsFileSystem(conf, WebHdfsConstants.WEBHDFS_SCHEME);
+
+  //create a file
+  final long length = 1L << 20;
+  final Path file1 = new Path(dir, "testFile");
+
+  DFSTestUtil.createFile(fs, file1, length, numDatanodes, 20120406L);
+
+  //get file status and check that it was written properly.
+  final FileStatus s1 = fs.getFileStatus(file1);
+  assertEquals("Write failed for file " + file1, length, s1.getLen());
+
+  FSDataInputStream in = fs.open(file1);
+  in.read(); // Connection is made only when the first read() occurs.
+  final WebHdfsInputStream webIn =
+  (WebHdfsInputStream)(in.getWrappedStream());
+
+  final String msg = "Throwing dummy exception";
+  IOException ioe = new IOException(msg, new DummyThrowable());
+
+  WebHdfsFileSystem.ReadRunner readRunner = spy(webIn.getReadRunner());
+  doThrow(ioe).when(readRunner).getResponse(any(HttpURLConnection.class));
+
+  webIn.setReadRunner(readRunner);
+
+  try {
+webIn.read();
+fail("Read should have thrown IOException.");
+  } catch (IOException e) {
+assertTrue(e.getMessage().contains(msg));
+assertTrue(e.getCause() instanceof DummyThrowable);
+  }
+} finally {
+  cluster.shutdown();
+}
+  }
+
+  final static class DummyThrowable extends Throwable {
+  }
 }



hadoop git commit: Addendum patch for Configuration fix. (Jason Lowe via asuresh)

2017-11-13 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9.0 1eb05c1dd -> 9a89a3e0b


Addendum patch for Configuration fix. (Jason Lowe via asuresh)

(cherry picked from commit 4e847d63a39268d8a34e0f22ddb5e40c5ef71e3a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9a89a3e0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9a89a3e0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9a89a3e0

Branch: refs/heads/branch-2.9.0
Commit: 9a89a3e0b09a2eae6aabadd510fa48c5e18c0dcc
Parents: 1eb05c1
Author: Arun Suresh 
Authored: Mon Nov 13 13:49:08 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 13:51:13 2017 -0800

--
 .../src/main/java/org/apache/hadoop/conf/Configuration.java| 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9a89a3e0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index 2532570..283308b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -2852,7 +2852,8 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 // xi:include are treated as inline and retain current source
 URL include = getResource(confInclude);
 if (include != null) {
-  Resource classpathResource = new Resource(include, name);
+  Resource classpathResource = new Resource(include, name,
+  wrapper.isParserRestricted());
   loadResource(properties, classpathResource, quiet);
 } else {
   URL url;
@@ -2873,7 +2874,8 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 }
 url = href.toURI().toURL();
   }
-  Resource uriResource = new Resource(url, name);
+  Resource uriResource = new Resource(url, name,
+  wrapper.isParserRestricted());
   loadResource(properties, uriResource, quiet);
 }
 break;





hadoop git commit: Addendum patch for Configuration fix. (Jason Lowe via asuresh)

2017-11-13 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 238a70c6c -> ed0658a57


Addendum patch for Configuration fix. (Jason Lowe via asuresh)

(cherry picked from commit 4e847d63a39268d8a34e0f22ddb5e40c5ef71e3a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ed0658a5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ed0658a5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ed0658a5

Branch: refs/heads/branch-2.9
Commit: ed0658a57903cd8abdcb7caaf54d7bf3e1f618ab
Parents: 238a70c
Author: Arun Suresh 
Authored: Mon Nov 13 13:49:08 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 13:50:44 2017 -0800

--
 .../src/main/java/org/apache/hadoop/conf/Configuration.java| 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ed0658a5/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index 2532570..283308b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -2852,7 +2852,8 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 // xi:include are treated as inline and retain current source
 URL include = getResource(confInclude);
 if (include != null) {
-  Resource classpathResource = new Resource(include, name);
+  Resource classpathResource = new Resource(include, name,
+  wrapper.isParserRestricted());
   loadResource(properties, classpathResource, quiet);
 } else {
   URL url;
@@ -2873,7 +2874,8 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 }
 url = href.toURI().toURL();
   }
-  Resource uriResource = new Resource(url, name);
+  Resource uriResource = new Resource(url, name,
+  wrapper.isParserRestricted());
   loadResource(properties, uriResource, quiet);
 }
 break;





hadoop git commit: Addendum patch for Configuration fix. (Jason Lowe via asuresh)

2017-11-13 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 30ab9b6ae -> 4e847d63a


Addendum patch for Configuration fix. (Jason Lowe via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4e847d63
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4e847d63
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4e847d63

Branch: refs/heads/branch-2
Commit: 4e847d63a39268d8a34e0f22ddb5e40c5ef71e3a
Parents: 30ab9b6
Author: Arun Suresh 
Authored: Mon Nov 13 13:49:08 2017 -0800
Committer: Arun Suresh 
Committed: Mon Nov 13 13:49:08 2017 -0800

--
 .../src/main/java/org/apache/hadoop/conf/Configuration.java| 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e847d63/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index 2532570..283308b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -2852,7 +2852,8 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 // xi:include are treated as inline and retain current source
 URL include = getResource(confInclude);
 if (include != null) {
-  Resource classpathResource = new Resource(include, name);
+  Resource classpathResource = new Resource(include, name,
+  wrapper.isParserRestricted());
   loadResource(properties, classpathResource, quiet);
 } else {
   URL url;
@@ -2873,7 +2874,8 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 }
 url = href.toURI().toURL();
   }
-  Resource uriResource = new Resource(url, name);
+  Resource uriResource = new Resource(url, name,
+  wrapper.isParserRestricted());
   loadResource(properties, uriResource, quiet);
 }
 break;





hadoop git commit: YARN-7369. Improve the resource types docs

2017-11-13 Thread templedf
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 5b55a74ba -> c2f479c24


YARN-7369. Improve the resource types docs


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c2f479c2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c2f479c2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c2f479c2

Branch: refs/heads/branch-3.0
Commit: c2f479c24842fe6f73aaf3786f8732aa97e7ab49
Parents: 5b55a74
Author: Daniel Templeton 
Authored: Mon Nov 13 11:06:04 2017 -0800
Committer: Daniel Templeton 
Committed: Mon Nov 13 11:08:58 2017 -0800

--
 hadoop-project/src/site/site.xml|   2 +-
 .../src/site/markdown/ResourceModel.md  | 217 +++
 .../src/site/markdown/ResourceProfiles.md   |  79 ---
 3 files changed, 218 insertions(+), 80 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c2f479c2/hadoop-project/src/site/site.xml
--
diff --git a/hadoop-project/src/site/site.xml b/hadoop-project/src/site/site.xml
index 2aa1da7..35aa822 100644
--- a/hadoop-project/src/site/site.xml
+++ b/hadoop-project/src/site/site.xml
@@ -128,6 +128,7 @@
   
   
   
+  
   
   
   
@@ -144,7 +145,6 @@
   
   
   
-  
 
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c2f479c2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md
new file mode 100644
index 0000000..ce968ce
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md
@@ -0,0 +1,217 @@
+
+
+Hadoop: YARN Resource Configuration
+===
+
+Overview
+
+YARN supports an extensible resource model. By default YARN tracks CPU and
+memory for all nodes, applications, and queues, but the resource definition
+can be extended to include arbitrary "countable" resources. A countable
+resource is a resource that is consumed while a container is running, but is
+released afterwards. CPU and memory are both countable resources. Other
+examples include GPU resources and software licenses.
+
+Configuration
+-
+
+The following configuration properties are supported. See below for details.
+
+`resource-types.xml`
+
+| Configuration Property | Value | Description |
+|:---- |:---- |:---- |
+| `yarn.resource-types` | Comma-separated list of additional resources. May not include `memory`, `memory-mb`, or `vcores` |
+| `yarn.resource-types.<resource>.units` | Default unit for the specified resource type |
+| `yarn.resource-types.<resource>.minimum` | The minimum request for the specified resource type |
+| `yarn.resource-types.<resource>.maximum` | The maximum request for the specified resource type |
+
+`node-resources.xml`
+
+| Configuration Property | Value | Description |
+|:---- |:---- |:---- |
+| `yarn.nodemanager.resource-type.<resource>` | The count of the specified resource available from the node manager |
+
+Please note that the `resource-types.xml` and `node-resources.xml` files
+also need to be placed in the same configuration directory as `yarn-site.xml`
+if they are used. Alternatively, the properties may be placed into the
+`yarn-site.xml` file instead.
+
+YARN Resource Model
+---
+
+### Resource Manager
+The resource manager is the final arbiter of what resources in the cluster are
+tracked. The resource manager loads its resource definition from XML
+configuration files. For example, to define a new resource in addition to
+CPU and memory, the following property should be configured:
+
+```xml
+<configuration>
+  <property>
+    <name>yarn.resource-types</name>
+    <value>resource1,resource2</value>
+    <description>
+    The resources to be used for scheduling. Use resource-types.xml
+    to specify details about the individual resource types.
+    </description>
+  </property>
+</configuration>
+```
+
+A valid resource name must begin with a letter and contain only letters, numbers,
+and any of: '.', '_', or '-'. A valid resource name may also be optionally
+preceded by a name space followed by a slash. A valid name space consists of
+period-separated groups of letters, numbers, and dashes. For example, the
+following are valid resource names:
+
+* myresource
+* my_resource
+* My-Resource01
+* com.acme/myresource
+
+The following are examples of invalid resource names:
+
+* 10myresource
+* my resource
+* com/acme/myresource
+* $NS/myresource
+* -none-/myresource
+
+For each new resource type defined an optional unit property can be added to
+set the default unit for the 
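The passage above (truncated in the archive) introduces the optional per-resource `units`, `minimum`, and `maximum` properties from the configuration table. A hedged sketch of what a matching `resource-types.xml` might look like (the resource names and values are illustrative, not defaults):

```xml
<configuration>
  <property>
    <name>yarn.resource-types</name>
    <value>resource1,resource2</value>
  </property>
  <property>
    <!-- default unit applied when a request omits one -->
    <name>yarn.resource-types.resource1.units</name>
    <value>G</value>
  </property>
  <property>
    <name>yarn.resource-types.resource2.minimum</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.resource-types.resource2.maximum</name>
    <value>1024</value>
  </property>
</configuration>
```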

hadoop git commit: YARN-7369. Improve the resource types docs

2017-11-13 Thread templedf
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2e512f016 -> 040a38dc4


YARN-7369. Improve the resource types docs


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/040a38dc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/040a38dc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/040a38dc

Branch: refs/heads/trunk
Commit: 040a38dc493adf44e9552b8971acf36188c30152
Parents: 2e512f0
Author: Daniel Templeton 
Authored: Mon Nov 13 11:05:07 2017 -0800
Committer: Daniel Templeton 
Committed: Mon Nov 13 11:05:07 2017 -0800

--
 hadoop-project/src/site/site.xml|   2 +-
 .../src/site/markdown/ResourceModel.md  | 275 +++
 .../src/site/markdown/ResourceProfiles.md   | 116 
 3 files changed, 276 insertions(+), 117 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/040a38dc/hadoop-project/src/site/site.xml
--
diff --git a/hadoop-project/src/site/site.xml b/hadoop-project/src/site/site.xml
index 57cff9a..be48ddb 100644
--- a/hadoop-project/src/site/site.xml
+++ b/hadoop-project/src/site/site.xml
@@ -128,6 +128,7 @@
   
   
   
+  
   
   
   
@@ -143,7 +144,6 @@
   
   
   
-  
   
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/040a38dc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md
new file mode 100644
index 0000000..75e5c92
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceModel.md
@@ -0,0 +1,275 @@
+
+
+Hadoop: YARN Resource Configuration
+===
+
+Overview
+
+YARN supports an extensible resource model. By default YARN tracks CPU and
+memory for all nodes, applications, and queues, but the resource definition
+can be extended to include arbitrary "countable" resources. A countable
+resource is a resource that is consumed while a container is running, but is
+released afterwards. CPU and memory are both countable resources. Other
+examples include GPU resources and software licenses.
+
+In addition, YARN also supports the use of "resource profiles", which allow a
+user to specify multiple resource requests through a single profile, similar to
+Amazon Web Services Elastic Compute Cloud (EC2) instance types. For example,
+"large" might mean 8 virtual cores and 16GB RAM.
+
+Configuration
+-
+
+The following configuration properties are supported. See below for details.
+
+`yarn-site.xml`
+
+| Configuration Property | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.resource-profiles.enabled` | Indicates whether resource profiles support is enabled. Defaults to `false`. |
+
+`resource-types.xml`
+
+| Configuration Property | Value | Description |
+|:---- |:---- |:---- |
+| `yarn.resource-types` | Comma-separated list of additional resources. May not include `memory`, `memory-mb`, or `vcores` |
+| `yarn.resource-types.<resource>.units` | Default unit for the specified resource type |
+| `yarn.resource-types.<resource>.minimum` | The minimum request for the specified resource type |
+| `yarn.resource-types.<resource>.maximum` | The maximum request for the specified resource type |
+
+`node-resources.xml`
+
+| Configuration Property | Value | Description |
+|:---- |:---- |:---- |
+| `yarn.nodemanager.resource-type.<resource>` | The count of the specified resource available from the node manager |
+
+Please note that the `resource-types.xml` and `node-resources.xml` files
+also need to be placed in the same configuration directory as `yarn-site.xml`
+if they are used. Alternatively, the properties may be placed into the
+`yarn-site.xml` file instead.
+
+YARN Resource Model
+---
+
+### Resource Manager
+The resource manager is the final arbiter of what resources in the cluster are
+tracked. The resource manager loads its resource definition from XML
+configuration files. For example, to define a new resource in addition to
+CPU and memory, the following property should be configured:
+
+```xml
+<configuration>
+  <property>
+    <name>yarn.resource-types</name>
+    <value>resource1,resource2</value>
+    <description>
+    The resources to be used for scheduling. Use resource-types.xml
+    to specify details about the individual resource types.
+    </description>
+  </property>
+</configuration>
+```
+
+A valid resource name must begin with a letter and contain only letters, numbers,
+and any of: '.', '_', or '-'. A valid resource name may also be optionally
+preceded by a name space 

hadoop git commit: YARN-7442. [YARN-7069] Limit format of resource type name (Contributed by Wangda Tan via Daniel Templeton)

2017-11-13 Thread templedf
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 f90f77238 -> 5b55a74ba


YARN-7442. [YARN-7069] Limit format of resource type name (Contributed by 
Wangda Tan via Daniel Templeton)

(cherry picked from commit 2c6213a44280f5b3950167131293ff83f48ff56f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5b55a74b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5b55a74b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5b55a74b

Branch: refs/heads/branch-3.0
Commit: 5b55a74bad2fe02bc41ffc3e13d46468d4c92d22
Parents: f90f772
Author: Daniel Templeton 
Authored: Mon Nov 13 10:37:30 2017 -0800
Committer: Daniel Templeton 
Committed: Mon Nov 13 11:03:56 2017 -0800

--
 .../yarn/api/records/ResourceInformation.java   |  5 +++
 .../yarn/util/resource/ResourceUtils.java   | 26 ++
 .../yarn/util/resource/TestResourceUtils.java   | 37 
 3 files changed, 68 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5b55a74b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
index cc61d86..e8280ba 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
@@ -55,6 +55,11 @@ public class ResourceInformation implements Comparable<ResourceInformation> {
   /**
* Set the name for the resource.
*
+   * A valid resource name must begin with a letter and contain only letters,
+   * numbers, and any of: '.', '_', or '-'. A valid resource name may also be
+   * optionally preceded by a name space followed by a slash. A valid name space
+   * consists of period-separated groups of letters, numbers, and dashes."
+   *
* @param rName name for the resource
*/
   public void setName(String rName) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5b55a74b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
index 002a6de..540cd9e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
@@ -62,6 +62,10 @@ public class ResourceUtils {
   private static final Pattern RESOURCE_REQUEST_VALUE_PATTERN =
   Pattern.compile("^([0-9]+) ?([a-zA-Z]*)$");
 
+  private static final Pattern RESOURCE_NAME_PATTERN = Pattern.compile(
+  "^(((\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?\\.)*"
+  + "\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?)/)?\\p{Alpha}([\\w.-]*)$");
+
   private static volatile boolean initializedResources = false;
  private static final Map<String, Integer> RESOURCE_NAME_TO_INDEX =
      new ConcurrentHashMap<String, Integer>();
@@ -210,6 +214,23 @@ public class ResourceUtils {
   }
 
   @VisibleForTesting
+  static void validateNameOfResourceNameAndThrowException(String resourceName)
+  throws YarnRuntimeException {
+Matcher matcher = RESOURCE_NAME_PATTERN.matcher(resourceName);
+if (!matcher.matches()) {
+  String message = String.format(
+  "'%s' is not a valid resource name. A valid resource name must"
+  + " begin with a letter and contain only letters, numbers, "
+  + "and any of: '.', '_', or '-'. A valid resource name may also"
+  + " be optionally preceded by a name space followed by a slash."
+  + " A valid name space consists of period-separated groups of"
+  + " letters, numbers, and dashes.",
+  resourceName);
+  throw new YarnRuntimeException(message);
+}
+  }
+
+  @VisibleForTesting
   static void initializeResourcesMap(Configuration conf) {
 
    Map<String, ResourceInformation> resourceInformationMap = new HashMap<>();
@@ -247,6 +268,11 @@ public class ResourceUtils {
   }
 }
 
+// Validate names of 
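The validator added above rejects any name that does not match `RESOURCE_NAME_PATTERN`. A self-contained sketch that exercises the same regular expression against the valid and invalid examples from the YARN-7369 docs (the class and helper names are illustrative):

```java
import java.util.regex.Pattern;

public class ResourceNameCheck {
    // Same pattern as YARN-7442: an optional period-separated namespace plus
    // "/", then a name starting with a letter, containing word chars, '.', '-'.
    static final Pattern RESOURCE_NAME_PATTERN = Pattern.compile(
        "^(((\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?\\.)*"
        + "\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?)/)?\\p{Alpha}([\\w.-]*)$");

    static boolean isValid(String name) {
        return RESOURCE_NAME_PATTERN.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("com.acme/myresource")); // true
        System.out.println(isValid("My-Resource01"));       // true
        System.out.println(isValid("10myresource"));        // false
        System.out.println(isValid("my resource"));         // false
    }
}
```
Note that the namespace branch is optional, so plain names like `myresource` validate, while a leading digit or an embedded space fails the name branch.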

hadoop git commit: YARN-7442. [YARN-7069] Limit format of resource type name (Contributed by Wangda Tan via Daniel Templeton)

2017-11-13 Thread templedf
Repository: hadoop
Updated Branches:
  refs/heads/trunk fa4b5c669 -> 2e512f016


YARN-7442. [YARN-7069] Limit format of resource type name (Contributed by 
Wangda Tan via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2e512f01
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2e512f01
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2e512f01

Branch: refs/heads/trunk
Commit: 2e512f016ed689b5afbf1e27fdcd7c9f75b6dc9c
Parents: fa4b5c6
Author: Daniel Templeton 
Authored: Mon Nov 13 10:37:30 2017 -0800
Committer: Daniel Templeton 
Committed: Mon Nov 13 11:03:30 2017 -0800

--
 .../yarn/api/records/ResourceInformation.java   |  5 +++
 .../yarn/util/resource/ResourceUtils.java   | 26 ++
 .../yarn/util/resource/TestResourceUtils.java   | 37 
 3 files changed, 68 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e512f01/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
index 59908ef..67592cc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
@@ -65,6 +65,11 @@ public class ResourceInformation implements Comparable<ResourceInformation> {
   /**
* Set the name for the resource.
*
+   * A valid resource name must begin with a letter and contain only letters,
+   * numbers, and any of: '.', '_', or '-'. A valid resource name may also be
+   * optionally preceded by a name space followed by a slash. A valid name
+   * space consists of period-separated groups of letters, numbers, and dashes.
+   *
* @param rName name for the resource
*/
   public void setName(String rName) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e512f01/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
index 1170c72..3deace8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
@@ -62,6 +62,10 @@ public class ResourceUtils {
   private static final Pattern RESOURCE_REQUEST_VALUE_PATTERN =
   Pattern.compile("^([0-9]+) ?([a-zA-Z]*)$");
 
+  private static final Pattern RESOURCE_NAME_PATTERN = Pattern.compile(
+  "^(((\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?\\.)*"
+  + "\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?)/)?\\p{Alpha}([\\w.-]*)$");
+
   private static volatile boolean initializedResources = false;
  private static final Map<String, Integer> RESOURCE_NAME_TO_INDEX =
      new ConcurrentHashMap<String, Integer>();
@@ -209,6 +213,23 @@ public class ResourceUtils {
   }
 
   @VisibleForTesting
+  static void validateNameOfResourceNameAndThrowException(String resourceName)
+  throws YarnRuntimeException {
+Matcher matcher = RESOURCE_NAME_PATTERN.matcher(resourceName);
+if (!matcher.matches()) {
+  String message = String.format(
+  "'%s' is not a valid resource name. A valid resource name must"
+  + " begin with a letter and contain only letters, numbers, "
+  + "and any of: '.', '_', or '-'. A valid resource name may also"
+  + " be optionally preceded by a name space followed by a slash."
+  + " A valid name space consists of period-separated groups of"
+  + " letters, numbers, and dashes.",
+  resourceName);
+  throw new YarnRuntimeException(message);
+}
+  }
+
+  @VisibleForTesting
   static void initializeResourcesMap(Configuration conf) {
 
Map<String, ResourceInformation> resourceInformationMap = new HashMap<>();
@@ -246,6 +267,11 @@ public class ResourceUtils {
   }
 }
 
+// Validate names of resource information map.
+for (String name : resourceInformationMap.keySet()) {
+  validateNameOfResourceNameAndThrowException(name);
+}
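The namespace/name grammar enforced by RESOURCE_NAME_PATTERN above can be exercised in isolation. A minimal sketch (class and example names are mine, the regex is copied verbatim from the patch):

```java
import java.util.regex.Pattern;

public class ResourceNameCheck {
    // Same pattern as in the YARN-7442 patch: an optional "namespace/" prefix
    // (period-separated alphanumeric groups that may contain dashes), then a
    // name that begins with a letter and continues with word chars, '.' or '-'.
    private static final Pattern RESOURCE_NAME_PATTERN = Pattern.compile(
        "^(((\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?\\.)*"
        + "\\p{Alnum}([\\p{Alnum}-]*\\p{Alnum})?)/)?\\p{Alpha}([\\w.-]*)$");

    static boolean isValid(String name) {
        return RESOURCE_NAME_PATTERN.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("memory-mb"));    // plain name: valid
        System.out.println(isValid("yarn.io/gpu"));  // namespaced name: valid
        System.out.println(isValid("1gpu"));         // starts with a digit: invalid
    }
}
```

Note that the name part itself cannot contain a slash, so at most one namespace separator is accepted.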

hadoop git commit: YARN-7447. Fixed bug in create YARN services via RM. (Contributed by Billie Rinaldi)

2017-11-13 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0d6bab94c -> fa4b5c669


YARN-7447. Fixed bug in create YARN services via RM.  (Contributed by Billie 
Rinaldi)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa4b5c66
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa4b5c66
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa4b5c66

Branch: refs/heads/trunk
Commit: fa4b5c669c04d83d92bc73ad72e8311d93c3ed0d
Parents: 0d6bab9
Author: Eric Yang 
Authored: Mon Nov 13 13:59:58 2017 -0500
Committer: Eric Yang 
Committed: Mon Nov 13 13:59:58 2017 -0500

--
 hadoop-yarn-project/hadoop-yarn/bin/yarn | 8 
 1 file changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa4b5c66/hadoop-yarn-project/hadoop-yarn/bin/yarn
--
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn 
b/hadoop-yarn-project/hadoop-yarn/bin/yarn
index 00596c2..d7b44b9 100755
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn
@@ -149,6 +149,14 @@ ${HADOOP_COMMON_HOME}/${HADOOP_COMMON_LIB_JARS_DIR}"
   if [[ -n "${YARN_RESOURCEMANAGER_HEAPSIZE}" ]]; then
 HADOOP_HEAPSIZE_MAX="${YARN_RESOURCEMANAGER_HEAPSIZE}"
   fi
+  local sld="${HADOOP_YARN_HOME}/${YARN_DIR},\
+${HADOOP_YARN_HOME}/${YARN_LIB_JARS_DIR},\
+${HADOOP_HDFS_HOME}/${HDFS_DIR},\
+${HADOOP_HDFS_HOME}/${HDFS_LIB_JARS_DIR},\
+${HADOOP_COMMON_HOME}/${HADOOP_COMMON_DIR},\
+${HADOOP_COMMON_HOME}/${HADOOP_COMMON_LIB_JARS_DIR}"
+  hadoop_translate_cygwin_path sld
+  hadoop_add_param HADOOP_OPTS service.libdir "-Dservice.libdir=${sld}"
 ;;
 rmadmin)
   HADOOP_CLASSNAME='org.apache.hadoop.yarn.client.cli.RMAdminCLI'

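The script change above joins the YARN, HDFS, and common lib directories into a comma-separated list and hands it to the JVM as -Dservice.libdir. A hypothetical Java-side consumer (the class and method here are illustrative, not the actual ServiceClient code) would read it back as a system property:

```java
public class ServiceLibDirSketch {
    // Illustrative reader for the -Dservice.libdir value that the yarn
    // launcher builds above; the property name matches the hadoop_add_param
    // call, and the ',' split mirrors the comma-joined list in the script.
    static String[] libDirs() {
        String raw = System.getProperty("service.libdir", "");
        return raw.isEmpty() ? new String[0] : raw.split(",");
    }

    public static void main(String[] args) {
        System.setProperty("service.libdir",
            "/opt/hadoop/yarn/lib,/opt/hadoop/hdfs/lib");
        for (String dir : libDirs()) {
            System.out.println(dir);
        }
    }
}
```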

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[1/2] hadoop git commit: HADOOP-15008. Fixed period unit calculation for Hadoop Metrics V2. (Contributed by Erik Krogen)

2017-11-13 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 782681c73 -> 0d6bab94c


HADOOP-15008.  Fixed period unit calculation for Hadoop Metrics V2.  
(Contributed by Erik Krogen)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1b68b8ff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1b68b8ff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1b68b8ff

Branch: refs/heads/trunk
Commit: 1b68b8ff2c6d4704f748d47fc0b903636f3e98c7
Parents: 975a57a
Author: Eric Yang 
Authored: Mon Nov 13 12:40:45 2017 -0500
Committer: Eric Yang 
Committed: Mon Nov 13 12:42:43 2017 -0500

--
 .../metrics2/impl/MetricsSinkAdapter.java   | 12 ++---
 .../hadoop/metrics2/impl/MetricsSystemImpl.java |  7 ++-
 .../metrics2/impl/TestMetricsSystemImpl.java| 49 
 3 files changed, 61 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b68b8ff/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
index 1199ebd..f2e607b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
@@ -51,7 +51,7 @@ class MetricsSinkAdapter implements 
SinkQueue.Consumer<MetricsBuffer> {
   private final Thread sinkThread;
   private volatile boolean stopping = false;
   private volatile boolean inError = false;
-  private final int period, firstRetryDelay, retryCount;
+  private final int periodMs, firstRetryDelay, retryCount;
   private final long oobPutTimeout;
   private final float retryBackoff;
   private final MetricsRegistry registry = new MetricsRegistry("sinkadapter");
@@ -62,7 +62,7 @@ class MetricsSinkAdapter implements 
SinkQueue.Consumer<MetricsBuffer> {
   MetricsSinkAdapter(String name, String description, MetricsSink sink,
  String context, MetricsFilter sourceFilter,
  MetricsFilter recordFilter, MetricsFilter metricFilter,
- int period, int queueCapacity, int retryDelay,
+ int periodMs, int queueCapacity, int retryDelay,
  float retryBackoff, int retryCount) {
 this.name = checkNotNull(name, "name");
 this.description = description;
@@ -71,7 +71,7 @@ class MetricsSinkAdapter implements 
SinkQueue.Consumer<MetricsBuffer> {
 this.sourceFilter = sourceFilter;
 this.recordFilter = recordFilter;
 this.metricFilter = metricFilter;
-this.period = checkArg(period, period > 0, "period");
+this.periodMs = checkArg(periodMs, periodMs > 0, "period");
 firstRetryDelay = checkArg(retryDelay, retryDelay > 0, "retry delay");
 this.retryBackoff = checkArg(retryBackoff, retryBackoff>1, "retry 
backoff");
 oobPutTimeout = (long)
@@ -93,9 +93,9 @@ class MetricsSinkAdapter implements 
SinkQueue.Consumer<MetricsBuffer> {
 sinkThread.setDaemon(true);
   }
 
-  boolean putMetrics(MetricsBuffer buffer, long logicalTime) {
-if (logicalTime % period == 0) {
-  LOG.debug("enqueue, logicalTime="+ logicalTime);
+  boolean putMetrics(MetricsBuffer buffer, long logicalTimeMs) {
+if (logicalTimeMs % periodMs == 0) {
+  LOG.debug("enqueue, logicalTime="+ logicalTimeMs);
   if (queue.enqueue(buffer)) {
 refreshQueueSizeGauge();
 return true;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b68b8ff/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
index ee1672e..624edc9 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
@@ -519,7 +519,7 @@ public class MetricsSystemImpl extends MetricsSystem 
implements MetricsSource {
 conf.getFilter(SOURCE_FILTER_KEY),
 conf.getFilter(RECORD_FILTER_KEY),
 conf.getFilter(METRIC_FILTER_KEY),
-conf.getInt(PERIOD_KEY, PERIOD_DEFAULT),
+conf.getInt(PERIOD_KEY, PERIOD_DEFAULT) * 1000,
conf.getInt(QUEUE_CAPACITY_KEY, QUEUE_CAPACITY_DEFAULT),
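The core of HADOOP-15008 is a unit mismatch: the timer advances logical time in milliseconds, but the configured period was passed to MetricsSinkAdapter in seconds. Multiplying by 1000 in MetricsSystemImpl makes the modulo test fire once per period. A minimal sketch of that check (class and method names are mine, not the actual adapter API):

```java
public class MetricsPeriodSketch {
    // Mirrors the shape of putMetrics() after the fix: both the logical clock
    // and the sink period are in milliseconds, so the modulo test is true
    // exactly on period boundaries.
    static boolean shouldPublish(long logicalTimeMs, int periodMs) {
        return logicalTimeMs % periodMs == 0;
    }

    public static void main(String[] args) {
        int periodSeconds = 10;               // e.g. the configured *.period
        int periodMs = periodSeconds * 1000;  // the conversion added in MetricsSystemImpl

        System.out.println(shouldPublish(30_000, periodMs)); // on a period boundary
        System.out.println(shouldPublish(25_000, periodMs)); // between boundaries
    }
}
```

Without the conversion, a 10-second period compared against a millisecond clock would publish on every logical tick divisible by 10 ms, roughly a thousand times too often.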

[2/2] hadoop git commit: Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into trunk

2017-11-13 Thread eyang
Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into 
trunk


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0d6bab94
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0d6bab94
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0d6bab94

Branch: refs/heads/trunk
Commit: 0d6bab94c49dbc783912ad9903e3d76849b8122d
Parents: 1b68b8f 782681c
Author: Eric Yang 
Authored: Mon Nov 13 12:43:18 2017 -0500
Committer: Eric Yang 
Committed: Mon Nov 13 12:43:18 2017 -0500

--

--






hadoop git commit: HADOOP-15008. Fixed period unit calculation for Hadoop Metrics V2.

2017-11-13 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 975a57a68 -> 782681c73


HADOOP-15008.  Fixed period unit calculation for Hadoop Metrics V2.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/782681c7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/782681c7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/782681c7

Branch: refs/heads/trunk
Commit: 782681c73e4ae7a02206d4d26635bb1e4984fa24
Parents: 975a57a
Author: Eric Yang 
Authored: Mon Nov 13 12:40:45 2017 -0500
Committer: Eric Yang 
Committed: Mon Nov 13 12:40:45 2017 -0500

--
 .../metrics2/impl/MetricsSinkAdapter.java   | 12 ++---
 .../hadoop/metrics2/impl/MetricsSystemImpl.java |  7 ++-
 .../metrics2/impl/TestMetricsSystemImpl.java| 49 
 3 files changed, 61 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/782681c7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
index 1199ebd..f2e607b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSinkAdapter.java
@@ -51,7 +51,7 @@ class MetricsSinkAdapter implements 
SinkQueue.Consumer<MetricsBuffer> {
   private final Thread sinkThread;
   private volatile boolean stopping = false;
   private volatile boolean inError = false;
-  private final int period, firstRetryDelay, retryCount;
+  private final int periodMs, firstRetryDelay, retryCount;
   private final long oobPutTimeout;
   private final float retryBackoff;
   private final MetricsRegistry registry = new MetricsRegistry("sinkadapter");
@@ -62,7 +62,7 @@ class MetricsSinkAdapter implements 
SinkQueue.Consumer<MetricsBuffer> {
   MetricsSinkAdapter(String name, String description, MetricsSink sink,
  String context, MetricsFilter sourceFilter,
  MetricsFilter recordFilter, MetricsFilter metricFilter,
- int period, int queueCapacity, int retryDelay,
+ int periodMs, int queueCapacity, int retryDelay,
  float retryBackoff, int retryCount) {
 this.name = checkNotNull(name, "name");
 this.description = description;
@@ -71,7 +71,7 @@ class MetricsSinkAdapter implements 
SinkQueue.Consumer<MetricsBuffer> {
 this.sourceFilter = sourceFilter;
 this.recordFilter = recordFilter;
 this.metricFilter = metricFilter;
-this.period = checkArg(period, period > 0, "period");
+this.periodMs = checkArg(periodMs, periodMs > 0, "period");
 firstRetryDelay = checkArg(retryDelay, retryDelay > 0, "retry delay");
 this.retryBackoff = checkArg(retryBackoff, retryBackoff>1, "retry 
backoff");
 oobPutTimeout = (long)
@@ -93,9 +93,9 @@ class MetricsSinkAdapter implements 
SinkQueue.Consumer<MetricsBuffer> {
 sinkThread.setDaemon(true);
   }
 
-  boolean putMetrics(MetricsBuffer buffer, long logicalTime) {
-if (logicalTime % period == 0) {
-  LOG.debug("enqueue, logicalTime="+ logicalTime);
+  boolean putMetrics(MetricsBuffer buffer, long logicalTimeMs) {
+if (logicalTimeMs % periodMs == 0) {
+  LOG.debug("enqueue, logicalTime="+ logicalTimeMs);
   if (queue.enqueue(buffer)) {
 refreshQueueSizeGauge();
 return true;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/782681c7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
index ee1672e..624edc9 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
@@ -519,7 +519,7 @@ public class MetricsSystemImpl extends MetricsSystem 
implements MetricsSource {
 conf.getFilter(SOURCE_FILTER_KEY),
 conf.getFilter(RECORD_FILTER_KEY),
 conf.getFilter(METRIC_FILTER_KEY),
-conf.getInt(PERIOD_KEY, PERIOD_DEFAULT),
+conf.getInt(PERIOD_KEY, PERIOD_DEFAULT) * 1000,
 conf.getInt(QUEUE_CAPACITY_KEY, QUEUE_CAPACITY_DEFAULT),
 

hadoop git commit: HADOOP-15031. Fix javadoc issues in Hadoop Common. Contributed by Mukul Kumar Singh.

2017-11-13 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk fb62bd625 -> 975a57a68


HADOOP-15031. Fix javadoc issues in Hadoop Common. Contributed by Mukul Kumar 
Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/975a57a6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/975a57a6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/975a57a6

Branch: refs/heads/trunk
Commit: 975a57a6886e81e412bea35bf597beccc807a66f
Parents: fb62bd6
Author: Akira Ajisaka 
Authored: Mon Nov 13 23:11:03 2017 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 13 23:12:23 2017 +0900

--
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java| 3 +--
 .../hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java | 3 +--
 2 files changed, 2 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/975a57a6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index 64021ad..be0ec87 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -973,8 +973,7 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* @param opt If absent, assume {@link HandleOpt#path()}.
* @throws IllegalArgumentException If the FileStatus does not belong to
* this FileSystem
-   * @throws UnsupportedOperationException If
-   * {@link #createPathHandle(FileStatus, HandleOpt[])}
+   * @throws UnsupportedOperationException If {@link #createPathHandle}
* not overridden by subclass.
* @throws UnsupportedOperationException If this FileSystem cannot enforce
* the specified constraints.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/975a57a6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
index 550e6b9..e455abf 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
@@ -338,8 +338,7 @@ public final class Options {
 }
 
 /**
- * Utility function for mapping
- * {@link FileSystem#getPathHandle(FileStatus, HandleOpt[])} to a
+ * Utility function for mapping {@link FileSystem#getPathHandle} to a
  * fixed set of handle options.
  * @param fs Target filesystem
  * @param opt Options to bind in partially evaluated function





hadoop git commit: YARN-7445. Render Applications and Services page with filters in new YARN UI. Contributed by Vasudevan Skm.

2017-11-13 Thread sunilg
Repository: hadoop
Updated Branches:
  refs/heads/trunk 3e2607784 -> fb62bd625


YARN-7445. Render Applications and Services page with filters in new YARN UI. 
Contributed by Vasudevan Skm.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fb62bd62
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fb62bd62
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fb62bd62

Branch: refs/heads/trunk
Commit: fb62bd625f53f0407f711317b208a6e4de8e43bc
Parents: 3e26077
Author: Sunil G 
Authored: Mon Nov 13 19:41:49 2017 +0530
Committer: Sunil G 
Committed: Mon Nov 13 19:41:49 2017 +0530

--
 .gitignore  |   3 +
 .../components/em-table-simple-status-cell.js   |  31 ++
 .../webapp/app/controllers/app-table-columns.js |  30 --
 .../webapp/app/controllers/yarn-apps/apps.js|   5 +-
 .../webapp/app/controllers/yarn-services.js |   4 +-
 .../src/main/webapp/app/styles/app.css  | 101 +--
 .../components/em-table-simple-status-cell.hbs  |  27 +
 .../src/main/webapp/app/templates/yarn-apps.hbs |  64 +---
 .../main/webapp/app/templates/yarn-services.hbs |  74 ++
 .../hadoop-yarn-ui/src/main/webapp/bower.json   |   3 +-
 .../src/main/webapp/config/environment.js   |   1 -
 .../src/main/webapp/ember-cli-build.js  |   1 +
 .../src/main/webapp/jsconfig.json   |  10 +-
 .../hadoop-yarn-ui/src/main/webapp/package.json |   2 +-
 .../em-table-simple-status-cell-test.js |  43 
 .../hadoop-yarn-ui/src/main/webapp/yarn.lock|   6 +-
 16 files changed, 250 insertions(+), 155 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb62bd62/.gitignore
--
diff --git a/.gitignore b/.gitignore
index 724162d..817556f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -44,3 +44,6 @@ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/dist
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tmp
 yarnregistry.pdf
 patchprocess/
+
+
+.history/
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb62bd62/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-simple-status-cell.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-simple-status-cell.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-simple-status-cell.js
new file mode 100644
index 000..af8b605
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-simple-status-cell.js
@@ -0,0 +1,31 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Component.extend({
+  content: null,
+
+  classNames: ["em-table-simple-status-cell"],
+
+  statusName: Ember.computed("content", function () {
+var status = this.get("content");
+
+return status.toLowerCase().capitalize();
+  }),
+});

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb62bd62/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
index 8a34f1a..05bfad45 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
@@ -34,7 +34,8 @@ export default Ember.Controller.extend({
   headerTitle: 'Application ID',
   contentPath: 'id',
   cellComponentName: 'em-table-linked-cell',
-