[hadoop] branch docker-hadoop-runner updated: HDDS-1632. Make the hadoop home word readable and avoid sudo in hadoop-runner

2019-06-03 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch docker-hadoop-runner
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/docker-hadoop-runner by this push:
 new f671b56  HDDS-1632. Make the hadoop home word readable and avoid sudo in hadoop-runner
f671b56 is described below

commit f671b56a54f8c86a6b670553e543f0c25758e7be
Author: Márton Elek 
AuthorDate: Mon Jun 3 10:21:01 2019 +0200

HDDS-1632. Make the hadoop home word readable and avoid sudo in hadoop-runner
---
 Dockerfile | 1 +
 scripts/starter.sh | 5 +
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/Dockerfile b/Dockerfile
index 21299df..d7234ca 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -31,6 +31,7 @@ ENV PATH $PATH:/opt/hadoop/bin
 
 RUN groupadd --gid 1000 hadoop
 RUN useradd --uid 1000 hadoop --gid 100 --home /opt/hadoop
+RUN chmod 755 /opt/hadoop
 RUN echo "hadoop ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
 RUN chown hadoop /opt
 ADD scripts /opt/
diff --git a/scripts/starter.sh b/scripts/starter.sh
index 1328607..6b5bbe2 100755
--- a/scripts/starter.sh
+++ b/scripts/starter.sh
@@ -96,9 +96,6 @@ if [ -n "$KERBEROS_ENABLED" ]; then
 sed "s/SERVER/$KERBEROS_SERVER/g" "$DIR"/krb5.conf | sudo tee 
/etc/krb5.conf
 fi
 
-#To avoid docker volume permission problems
-sudo chmod o+rwx /data
-
 "$DIR"/envtoconf.py --destination "${HADOOP_CONF_DIR:-/opt/hadoop/etc/hadoop}"
 
 if [ -n "$ENSURE_NAMENODE_DIR" ]; then
@@ -139,7 +136,7 @@ if [ -n "$BYTEMAN_SCRIPT" ] || [ -n "$BYTEMAN_SCRIPT_URL" ]; then
   export PATH=$PATH:$BYTEMAN_DIR/bin
 
   if [ ! -z "$BYTEMAN_SCRIPT_URL" ]; then
-sudo wget $BYTEMAN_SCRIPT_URL -O /tmp/byteman.btm
+wget $BYTEMAN_SCRIPT_URL -O /tmp/byteman.btm
 export BYTEMAN_SCRIPT=/tmp/byteman.btm
   fi
 





[hadoop] branch trunk updated: YARN-8906. [UI2] NM hostnames not displayed correctly in Node Heatmap Chart. Contributed by Akhil PB.

2019-06-03 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 59719dc  YARN-8906. [UI2] NM hostnames not displayed correctly in Node Heatmap Chart. Contributed by Akhil PB.
59719dc is described below

commit 59719dc560cf67f485d8e5b4a6f0f38ef97d536b
Author: Sunil G 
AuthorDate: Mon Jun 3 15:53:23 2019 +0530

YARN-8906. [UI2] NM hostnames not displayed correctly in Node Heatmap Chart. Contributed by Akhil PB.
---
 .../hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js  | 6 --
 .../src/main/webapp/app/templates/components/nodes-heatmap.hbs  | 1 -
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js
index 1f772de..7eac266 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js
@@ -230,12 +230,14 @@ export default BaseChartComponent.extend({
 var node_id = data.get("id"),
 node_addr = encodeURIComponent(data.get("nodeHTTPAddress")),
 href = `#/yarn-node/${node_id}/${node_addr}/info`;
+var nodeHostName = data.get("nodeHostName");
 var a = g.append("a")
   .attr("href", href);
 a.append("text")
-  .text(data.get("nodeHostName"))
+  .text(nodeHostName.length > 30 ? nodeHostName.substr(0, 30) + '...' : nodeHostName)
   .attr("y", yOffset + this.CELL_HEIGHT / 2 + 5)
-  .attr("x", xOffset + this.CELL_WIDTH / 2)
+  .attr("x", nodeHostName.length > 30 ? xOffset + 10 : xOffset + this.CELL_WIDTH / 2)
+  .style("text-anchor", nodeHostName.length > 30 ? "start" : "middle")
   .attr("class", this.isNodeSelected(data) ? "heatmap-cell" : "heatmap-cell-notselected");
 if (this.isNodeSelected(data)) {
   this.bindTP(a, rect);
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs
index f68bba6..d1ac8e7 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs
@@ -25,4 +25,3 @@
 
   
 
-
\ No newline at end of file
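For illustration only: the display rule the patch applies, restated as a small standalone Java helper. The 30-character limit and "..." suffix come from the diff above; the class and method names here are hypothetical, not from the patch.

    // Sketch of the truncation rule in the nodes-heatmap change (hypothetical names).
    public final class HostNameLabels {
      private static final int MAX_LABEL_LENGTH = 30;

      static String toLabel(String hostName) {
        // Long hostnames are shortened and marked with an ellipsis, as in the JS patch.
        return hostName.length() > MAX_LABEL_LENGTH
            ? hostName.substring(0, MAX_LABEL_LENGTH) + "..."
            : hostName;
      }

      public static void main(String[] args) {
        System.out.println(toLabel("nm-01.example.com"));
        System.out.println(toLabel("a-very-long-nodemanager-hostname.dc1.example.com"));
      }
    }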





[hadoop] branch branch-3.2 updated: YARN-8906. [UI2] NM hostnames not displayed correctly in Node Heatmap Chart. Contributed by Akhil PB.

2019-06-03 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 2f01204  YARN-8906. [UI2] NM hostnames not displayed correctly in Node Heatmap Chart. Contributed by Akhil PB.
2f01204 is described below

commit 2f012044ff6b8e14a3c2138305ed6e177f3c92dd
Author: Sunil G 
AuthorDate: Mon Jun 3 15:53:23 2019 +0530

YARN-8906. [UI2] NM hostnames not displayed correctly in Node Heatmap Chart. Contributed by Akhil PB.

(cherry picked from commit 59719dc560cf67f485d8e5b4a6f0f38ef97d536b)
---
 .../hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js  | 6 --
 .../src/main/webapp/app/templates/components/nodes-heatmap.hbs  | 1 -
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js
index 1f772de..7eac266 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js
@@ -230,12 +230,14 @@ export default BaseChartComponent.extend({
 var node_id = data.get("id"),
 node_addr = encodeURIComponent(data.get("nodeHTTPAddress")),
 href = `#/yarn-node/${node_id}/${node_addr}/info`;
+var nodeHostName = data.get("nodeHostName");
 var a = g.append("a")
   .attr("href", href);
 a.append("text")
-  .text(data.get("nodeHostName"))
+  .text(nodeHostName.length > 30 ? nodeHostName.substr(0, 30) + '...' : nodeHostName)
   .attr("y", yOffset + this.CELL_HEIGHT / 2 + 5)
-  .attr("x", xOffset + this.CELL_WIDTH / 2)
+  .attr("x", nodeHostName.length > 30 ? xOffset + 10 : xOffset + this.CELL_WIDTH / 2)
+  .style("text-anchor", nodeHostName.length > 30 ? "start" : "middle")
   .attr("class", this.isNodeSelected(data) ? "heatmap-cell" : "heatmap-cell-notselected");
 if (this.isNodeSelected(data)) {
   this.bindTP(a, rect);
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs
index f68bba6..d1ac8e7 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs
@@ -25,4 +25,3 @@
 
   
 
-
\ No newline at end of file





[hadoop] branch branch-3.1 updated: YARN-8906. [UI2] NM hostnames not displayed correctly in Node Heatmap Chart. Contributed by Akhil PB.

2019-06-03 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 6be665c  YARN-8906. [UI2] NM hostnames not displayed correctly in Node Heatmap Chart. Contributed by Akhil PB.
6be665c is described below

commit 6be665cfc6850aea612f40e26b3b6d5c0a0a3f41
Author: Sunil G 
AuthorDate: Mon Jun 3 15:53:23 2019 +0530

YARN-8906. [UI2] NM hostnames not displayed correctly in Node Heatmap Chart. Contributed by Akhil PB.

(cherry picked from commit 59719dc560cf67f485d8e5b4a6f0f38ef97d536b)
---
 .../hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js  | 6 --
 .../src/main/webapp/app/templates/components/nodes-heatmap.hbs  | 1 -
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js
index 1f772de..7eac266 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/nodes-heatmap.js
@@ -230,12 +230,14 @@ export default BaseChartComponent.extend({
 var node_id = data.get("id"),
 node_addr = encodeURIComponent(data.get("nodeHTTPAddress")),
 href = `#/yarn-node/${node_id}/${node_addr}/info`;
+var nodeHostName = data.get("nodeHostName");
 var a = g.append("a")
   .attr("href", href);
 a.append("text")
-  .text(data.get("nodeHostName"))
+  .text(nodeHostName.length > 30 ? nodeHostName.substr(0, 30) + '...' : nodeHostName)
   .attr("y", yOffset + this.CELL_HEIGHT / 2 + 5)
-  .attr("x", xOffset + this.CELL_WIDTH / 2)
+  .attr("x", nodeHostName.length > 30 ? xOffset + 10 : xOffset + this.CELL_WIDTH / 2)
+  .style("text-anchor", nodeHostName.length > 30 ? "start" : "middle")
   .attr("class", this.isNodeSelected(data) ? "heatmap-cell" : "heatmap-cell-notselected");
 if (this.isNodeSelected(data)) {
   this.bindTP(a, rect);
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs
index f68bba6..d1ac8e7 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/nodes-heatmap.hbs
@@ -25,4 +25,3 @@
 
   
 
-
\ No newline at end of file





[hadoop] branch branch-3.0 updated: HADOOP-16212. Update guava to 27.0-jre in hadoop-project branch-3.0. Contributed by Gabor Bota.

2019-06-03 Thread mackrorysd
This is an automated email from the ASF dual-hosted git repository.

mackrorysd pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new c352b00  HADOOP-16212. Update guava to 27.0-jre in hadoop-project branch-3.0. Contributed by Gabor Bota.
c352b00 is described below

commit c352b0011ed2057dcd699689045095941dddc130
Author: Sean Mackrory 
AuthorDate: Mon Jun 3 07:45:56 2019 -0600

HADOOP-16212. Update guava to 27.0-jre in hadoop-project branch-3.0. Contributed by Gabor Bota.
---
 .../hadoop-common/dev-support/findbugsExcludeFile.xml|  7 +++
 .../main/java/org/apache/hadoop/conf/Configuration.java  |  5 +++--
 .../src/main/java/org/apache/hadoop/security/Groups.java |  2 +-
 .../apache/hadoop/util/SemaphoredDelegatingExecutor.java | 14 +++---
 .../src/main/java/org/apache/hadoop/util/ZKUtil.java |  2 +-
 .../java/org/apache/hadoop/net/TestTableMapping.java | 16 
 .../src/test/java/org/apache/hadoop/util/TestZKUtil.java |  2 +-
 .../hadoop-kms/dev-support/findbugsExcludeFile.xml   |  8 
 .../org/apache/hadoop/crypto/key/kms/server/KMS.java |  2 +-
 .../server/federation/resolver/order/LocalResolver.java  |  2 +-
 .../hadoop-hdfs/dev-support/findbugsExcludeFile.xml  |  8 
 .../hadoop/hdfs/qjournal/client/IPCLoggerChannel.java|  2 +-
 .../apache/hadoop/hdfs/qjournal/client/QuorumCall.java   |  3 ++-
 .../server/datanode/checker/DatasetVolumeChecker.java| 13 +++--
 .../server/datanode/checker/ThrottledAsyncChecker.java   |  2 +-
 .../hadoop/hdfs/server/namenode/ReencryptionHandler.java |  2 +-
 .../checker/TestThrottledAsyncCheckerTimeout.java| 11 +++
 .../hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java |  6 +++---
 .../apache/hadoop/mapred/LocatedFileStatusFetcher.java   |  9 ++---
 hadoop-project/pom.xml   |  2 +-
 20 files changed, 75 insertions(+), 43 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml b/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
index 4bafd8e..aea425d 100644
--- a/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
@@ -409,6 +409,13 @@
 
   
 
+  
+  
+
+
+
+  
+
   
 
 
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index 90557d1..5e98bb0 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -68,6 +68,7 @@ import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicReference;
 
+import javax.annotation.Nullable;
 import javax.xml.parsers.DocumentBuilderFactory;
 import javax.xml.parsers.ParserConfigurationException;
 import javax.xml.stream.XMLInputFactory;
@@ -3361,7 +3362,7 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
* 
* @param out the writer to write to.
*/
-  public void writeXml(String propertyName, Writer out)
+  public void writeXml(@Nullable String propertyName, Writer out)
   throws IOException, IllegalArgumentException {
 Document doc = asXmlDocument(propertyName);
 
@@ -3383,7 +3384,7 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
   /**
* Return the XML DOM corresponding to this Configuration.
*/
-  private synchronized Document asXmlDocument(String propertyName)
+  private synchronized Document asXmlDocument(@Nullable String propertyName)
   throws IOException, IllegalArgumentException {
 Document doc;
 try {
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
index 63ec9a5..b29278b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
@@ -375,7 +375,7 @@ public class Groups {
   backgroundRefreshException.incrementAndGet();
   backgroundRefreshRunning.decrementAndGet();
 }
-  });
+  }, MoreExecutors.directExecutor());
   return listenableFuture;
 }
 
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SemaphoredDelegatingExecutor.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SemaphoredDelegatingExecutor.java
index bcc19e3..4ec77e7 100644
--- a/hadoop-common-project/hadoop-common/src/main/ja
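The Groups.java hunk above shows the core source-level incompatibility this upgrade fixes: newer Guava drops the old two-argument Futures.addCallback overload, so every call site must pass an explicit Executor. A minimal standalone sketch of the new call shape, assuming Guava 27.0-jre on the classpath (the class name here is hypothetical):

    import com.google.common.util.concurrent.FutureCallback;
    import com.google.common.util.concurrent.Futures;
    import com.google.common.util.concurrent.ListenableFuture;
    import com.google.common.util.concurrent.ListeningExecutorService;
    import com.google.common.util.concurrent.MoreExecutors;
    import java.util.concurrent.Executors;

    public class AddCallbackDemo {
      public static void main(String[] args) {
        ListeningExecutorService pool =
            MoreExecutors.listeningDecorator(Executors.newSingleThreadExecutor());
        ListenableFuture<String> f = pool.submit(() -> "refreshed");
        // Guava 27 requires the third Executor argument; directExecutor()
        // runs the callback on the thread that completes the future.
        Futures.addCallback(f, new FutureCallback<String>() {
          @Override public void onSuccess(String result) { System.out.println(result); }
          @Override public void onFailure(Throwable t) { t.printStackTrace(); }
        }, MoreExecutors.directExecutor());
        pool.shutdown();
      }
    }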

[hadoop] branch trunk updated: YARN-9580. Fulfilled reservation information in assignment is lost when transferring in ParentQueue#assignContainers. Contributed by Tao Yang.

2019-06-03 Thread wwei
This is an automated email from the ASF dual-hosted git repository.

wwei pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new bd2590d  YARN-9580. Fulfilled reservation information in assignment is lost when transferring in ParentQueue#assignContainers. Contributed by Tao Yang.
bd2590d is described below

commit bd2590d71ba1f3db1c686f7afeaf51382f8d8a2f
Author: Weiwei Yang 
AuthorDate: Mon Jun 3 22:59:02 2019 +0800

YARN-9580. Fulfilled reservation information in assignment is lost when transferring in ParentQueue#assignContainers. Contributed by Tao Yang.
---
 .../scheduler/capacity/ParentQueue.java|  4 ++
 .../capacity/TestCapacitySchedulerMultiNodes.java  | 57 ++
 2 files changed, 61 insertions(+)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
index 8a7acd6..c56369c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
@@ -631,6 +631,10 @@ public class ParentQueue extends AbstractCSQueue {
   assignedToChild.getRequestLocalityType());
   assignment.setExcessReservation(assignedToChild.getExcessReservation());
   assignment.setContainersToKill(assignedToChild.getContainersToKill());
+  assignment.setFulfilledReservation(
+  assignedToChild.isFulfilledReservation());
+  assignment.setFulfilledReservedContainer(
+  assignedToChild.getFulfilledReservedContainer());
 
   // Done if no child-queue assigned anything
   if (Resources.greaterThan(resourceCalculator, clusterResource,
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerMultiNodes.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerMultiNodes.java
index 6c9faa6..0e29576 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerMultiNodes.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerMultiNodes.java
@@ -245,4 +245,61 @@ public class TestCapacitySchedulerMultiNodes extends CapacitySchedulerTestBase {
 
 rm1.close();
   }
+
+  @Test(timeout=3)
+  public void testAllocateForReservedContainer() throws Exception {
+CapacitySchedulerConfiguration newConf =
+new CapacitySchedulerConfiguration(conf);
+newConf.set(YarnConfiguration.RM_PLACEMENT_CONSTRAINTS_HANDLER,
+YarnConfiguration.SCHEDULER_RM_PLACEMENT_CONSTRAINTS_HANDLER);
+newConf.setInt(CapacitySchedulerConfiguration.MULTI_NODE_SORTING_POLICY_NAME
++ ".resource-based.sorting-interval.ms", 0);
+newConf.setMaximumApplicationMasterResourcePerQueuePercent("root.default",
+1.0f);
+MockRM rm1 = new MockRM(newConf);
+
+rm1.start();
+MockNM nm1 = rm1.registerNode("h1:1234", 8 * GB);
+MockNM nm2 = rm1.registerNode("h2:1234", 8 * GB);
+
+// launch an app to queue, AM container should be launched in nm1
+RMApp app1 = rm1.submitApp(5 * GB, "app", "user", null, "default");
+MockAM am1 = MockRM.launchAndRegisterAM(app1, rm1, nm1);
+
+// launch another app to queue, AM container should be launched in nm2
+RMApp app2 = rm1.submitApp(5 * GB, "app", "user", null, "default");
+MockAM am2 = MockRM.launchAndRegisterAM(app2, rm1, nm2);
+
+CapacityScheduler cs = (CapacityScheduler) rm1.getResourceScheduler();
+RMNode rmNode1 = rm1.getRMContext().getRMNodes().get(nm1.getNodeId());
+FiCaSchedulerApp schedulerApp1 =
+cs.getApplicationAttempt(am1.getApplicationAttemptId());
+FiCaSchedulerApp schedulerApp2 =
+cs.getApplicationAttempt(am2.getApplicationAttemptId());
+
+/*
+ * Verify that reserved container will be allocated
+ * after node has sufficient resource.
+ */
+// Ask a container with 6GB memory size for app2,
+// nm1 will reserve a container

[hadoop] branch trunk updated: HDDS-1558. IllegalArgumentException while processing container Reports.

2019-06-03 Thread nanda
This is an automated email from the ASF dual-hosted git repository.

nanda pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f327112  HDDS-1558. IllegalArgumentException while processing container Reports.
f327112 is described below

commit f3271126fc9a3ad178b7dadd8edf851e16cf76d0
Author: Shashikant Banerjee 
AuthorDate: Tue Jun 4 00:59:02 2019 +0530

HDDS-1558. IllegalArgumentException while processing container Reports.

Signed-off-by: Nanda kumar 
---
 .../container/common/impl/HddsDispatcher.java  | 15 +++-
 .../ozone/container/common/interfaces/Handler.java |  9 +++
 .../container/keyvalue/KeyValueContainer.java  |  6 +-
 .../ozone/container/keyvalue/KeyValueHandler.java  | 14 
 .../rpc/TestContainerStateMachineFailures.java | 85 ++
 5 files changed, 125 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
index 4e8d5b9..6f56b3c 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
@@ -67,6 +67,7 @@ import io.opentracing.Scope;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import java.io.IOException;
 import java.util.Map;
 import java.util.Optional;
 import java.util.Set;
@@ -299,8 +300,18 @@ public class HddsDispatcher implements ContainerDispatcher, Auditor {
 State containerState = container.getContainerData().getState();
 Preconditions.checkState(
 containerState == State.OPEN || containerState == State.CLOSING);
-container.getContainerData()
-.setState(ContainerDataProto.State.UNHEALTHY);
+// mark and persist the container state to be unhealthy
+try {
+  handler.markContainerUhealthy(container);
+} catch (IOException ioe) {
+  // just log the error here in case marking the container fails,
+  // Return the actual failure response to the client
+  LOG.error("Failed to mark container " + containerID + " UNHEALTHY. ",
+  ioe);
+}
+// in any case, the in memory state of the container should be unhealthy
+Preconditions.checkArgument(
+container.getContainerData().getState() == State.UNHEALTHY);
 sendCloseContainerActionIfNeeded(container);
   }
 
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Handler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Handler.java
index a3bb34b..52d14db 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Handler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Handler.java
@@ -130,6 +130,15 @@ public abstract class Handler {
   throws IOException;
 
   /**
+   * Marks the container Unhealthy. Moves the container to UHEALTHY state.
+   *
+   * @param container container to update
+   * @throws IOException in case of exception
+   */
+  public abstract void markContainerUhealthy(Container container)
+  throws IOException;
+
+  /**
* Moves the Container to QUASI_CLOSED state.
*
* @param container container to be quasi closed
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
index 38257c3..6a1ca86 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
@@ -339,8 +339,10 @@ public class KeyValueContainer implements Container<KeyValueContainerData> {
   updateContainerFile(containerFile);
 
 } catch (StorageContainerException ex) {
-  if (oldState != null) {
-// Failed to update .container file. Reset the state to CLOSING
+  if (oldState != null
+  && containerData.getState() != ContainerDataProto.State.UNHEALTHY) {
+// Failed to update .container file. Reset the state to old state only
+// if the current state is not unhealthy.
 containerData.setState(oldState);
   }
   throw ex;
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/cont

[hadoop] branch trunk updated: HDDS-1625 : ConcurrentModificationException when SCM has containers of different owners. (#883)

2019-06-03 Thread nanda
This is an automated email from the ASF dual-hosted git repository.

nanda pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 21de9af  HDDS-1625 : ConcurrentModificationException when SCM has containers of different owners. (#883)
21de9af is described below

commit 21de9af9038961e36e7335dc1f688f5f48056d1c
Author: avijayanhwx <14299376+avijayan...@users.noreply.github.com>
AuthorDate: Mon Jun 3 12:45:04 2019 -0700

HDDS-1625 : ConcurrentModificationException when SCM has containers of different owners. (#883)
---
 .../hdds/scm/container/SCMContainerManager.java|  9 +---
 .../TestContainerStateManagerIntegration.java  | 24 ++
 2 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
index 359731c..1c1ffe1 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
@@ -43,6 +43,7 @@ import java.io.File;
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collections;
+import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.NavigableSet;
@@ -469,15 +470,17 @@ public class SCMContainerManager implements ContainerManager {
 */
  private NavigableSet<ContainerID> getContainersForOwner(
  NavigableSet<ContainerID> containerIDs, String owner) {
-for (ContainerID cid : containerIDs) {
+Iterator<ContainerID> containerIDIterator = containerIDs.iterator();
+while (containerIDIterator.hasNext()) {
+  ContainerID cid = containerIDIterator.next();
   try {
 if (!getContainer(cid).getOwner().equals(owner)) {
-  containerIDs.remove(cid);
+  containerIDIterator.remove();
 }
   } catch (ContainerNotFoundException e) {
 LOG.error("Could not find container info for container id={} {}", cid,
 e);
-containerIDs.remove(cid);
+containerIDIterator.remove();
   }
 }
 return containerIDs;
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManagerIntegration.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManagerIntegration.java
index 9f90a2d..e4f1a37 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManagerIntegration.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManagerIntegration.java
@@ -123,6 +123,30 @@ public class TestContainerStateManagerIntegration {
   }
 
   @Test
+  public void testAllocateContainerWithDifferentOwner() throws IOException {
+
+// Allocate a container and verify the container info
+ContainerWithPipeline container1 = scm.getClientProtocolServer()
+.allocateContainer(xceiverClientManager.getType(),
+xceiverClientManager.getFactor(), containerOwner);
+ContainerInfo info = containerManager
+.getMatchingContainer(OzoneConsts.GB * 3, containerOwner,
+container1.getPipeline());
+Assert.assertNotNull(info);
+
+String newContainerOwner = "OZONE_NEW";
+ContainerWithPipeline container2 = scm.getClientProtocolServer()
+.allocateContainer(xceiverClientManager.getType(),
+xceiverClientManager.getFactor(), newContainerOwner);
+ContainerInfo info2 = containerManager
+.getMatchingContainer(OzoneConsts.GB * 3, newContainerOwner,
+container1.getPipeline());
+Assert.assertNotNull(info2);
+
+Assert.assertNotEquals(info.containerID(), info2.containerID());
+  }
+
+  @Test
   public void testContainerStateManagerRestart() throws IOException,
   TimeoutException, InterruptedException, AuthenticationException {
 // Allocate 5 containers in ALLOCATED state and 5 in CREATING state
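The SCMContainerManager hunk above is the classic fail-fast iteration fix: removing from a set while a for-each loop iterates it throws ConcurrentModificationException, while removing through the Iterator is safe. A self-contained Java demonstration of the two patterns (not Ozone code):

    import java.util.ConcurrentModificationException;
    import java.util.Iterator;
    import java.util.NavigableSet;
    import java.util.TreeSet;

    public class IteratorRemoveDemo {
      public static void main(String[] args) {
        NavigableSet<Integer> ids = new TreeSet<>();
        for (int i = 0; i < 5; i++) {
          ids.add(i);
        }

        // Broken pattern (what the old loop did): structural modification
        // during a for-each traversal trips the fail-fast iterator.
        try {
          for (Integer id : ids) {
            if (id % 2 == 0) {
              ids.remove(id);
            }
          }
        } catch (ConcurrentModificationException e) {
          System.out.println("for-each + Set.remove -> " + e);
        }

        // Fixed pattern (what the patch switches to): Iterator.remove()
        // keeps the iterator's internal bookkeeping consistent.
        Iterator<Integer> it = ids.iterator();
        while (it.hasNext()) {
          if (it.next() % 2 == 0) {
            it.remove();
          }
        }
        System.out.println("remaining: " + ids);
      }
    }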





[hadoop] branch trunk updated: Opening of rocksDB in datanode fails with "No locks available"

2019-06-03 Thread nanda
This is an automated email from the ASF dual-hosted git repository.

nanda pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 277e9a8  Opening of rocksDB in datanode fails with "No locks available"
277e9a8 is described below

commit 277e9a835b5b45af8df70b0dca52c03074f0d6b5
Author: Mukul Kumar Singh 
AuthorDate: Tue Jun 4 02:12:44 2019 +0530

Opening of rocksDB in datanode fails with "No locks available"

Signed-off-by: Nanda kumar 
---
 .../container/common/utils/ContainerCache.java |  14 +--
 .../container/common/utils/ReferenceCountedDB.java |  28 ++---
 .../ozone/container/common/TestContainerCache.java | 128 +
 3 files changed, 145 insertions(+), 25 deletions(-)

diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/ContainerCache.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/ContainerCache.java
index ef75ec1..d25e53b 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/ContainerCache.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/ContainerCache.java
@@ -77,7 +77,8 @@ public final class ContainerCache extends LRUMap {
   while (iterator.hasNext()) {
 iterator.next();
 ReferenceCountedDB db = (ReferenceCountedDB) iterator.getValue();
-db.setEvicted(true);
+Preconditions.checkArgument(db.cleanup(), "refCount:",
+db.getReferenceCount());
   }
   // reset the cache
   cache.clear();
@@ -92,14 +93,9 @@ public final class ContainerCache extends LRUMap {
   @Override
   protected boolean removeLRU(LinkEntry entry) {
 ReferenceCountedDB db = (ReferenceCountedDB) entry.getValue();
-String dbFile = (String)entry.getKey();
 lock.lock();
 try {
-  db.setEvicted(false);
-  return true;
-} catch (Exception e) {
-  LOG.error("Eviction for db:{} failed", dbFile, e);
-  return false;
+  return db.cleanup();
 } finally {
   lock.unlock();
 }
@@ -156,8 +152,8 @@ public final class ContainerCache extends LRUMap {
 try {
   ReferenceCountedDB db = (ReferenceCountedDB)this.get(containerDBPath);
   if (db != null) {
-// marking it as evicted will close the db as well.
-db.setEvicted(true);
+Preconditions.checkArgument(db.cleanup(), "refCount:",
+db.getReferenceCount());
   }
   this.remove(containerDBPath);
 } finally {
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/ReferenceCountedDB.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/ReferenceCountedDB.java
index 31aca64..81cde5b 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/ReferenceCountedDB.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/ReferenceCountedDB.java
@@ -24,7 +24,6 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.Closeable;
-import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 
 /**
@@ -38,17 +37,19 @@ public class ReferenceCountedDB implements Closeable {
   private static final Logger LOG =
   LoggerFactory.getLogger(ReferenceCountedDB.class);
   private final AtomicInteger referenceCount;
-  private final AtomicBoolean isEvicted;
   private final MetadataStore store;
   private final String containerDBPath;
 
   public ReferenceCountedDB(MetadataStore store, String containerDBPath) {
 this.referenceCount = new AtomicInteger(0);
-this.isEvicted = new AtomicBoolean(false);
 this.store = store;
 this.containerDBPath = containerDBPath;
   }
 
+  public long getReferenceCount() {
+return referenceCount.get();
+  }
+
   public void incrementReference() {
 this.referenceCount.incrementAndGet();
 if (LOG.isDebugEnabled()) {
@@ -59,35 +60,30 @@ public class ReferenceCountedDB implements Closeable {
   }
 
   public void decrementReference() {
-this.referenceCount.decrementAndGet();
+int refCount = this.referenceCount.decrementAndGet();
+Preconditions.checkArgument(refCount >= 0, "refCount:", refCount);
 if (LOG.isDebugEnabled()) {
   LOG.debug("DecRef {} to refCnt {} \n", containerDBPath,
   referenceCount.get());
   new Exception().printStackTrace();
 }
-cleanup();
-  }
-
-  public void setEvicted(boolean checkNoReferences) {
-Preconditions.checkState(!checkNoReferences ||
-(referenceCount.get() == 0),
-"checkNoReferences:%b, referencount:%d, dbPath:%s",
-checkNoReferences, referenceCount.get(), containerDBPath);
-isEvicted.set(true
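A minimal sketch (not the Ozone class itself) of the reference-counting scheme the patch moves to: callers increment before use and decrement after, and cleanup() only closes the underlying handle once the count is back to zero, so a cache eviction can no longer close a DB that is still in use.

    import java.util.concurrent.atomic.AtomicInteger;

    class RefCountedHandle implements AutoCloseable {
      private final AtomicInteger refCount = new AtomicInteger(0);

      void incrementReference() {
        refCount.incrementAndGet();
      }

      void decrementReference() {
        int count = refCount.decrementAndGet();
        if (count < 0) {
          throw new IllegalStateException("refCount went negative: " + count);
        }
      }

      // Returns true only if nobody holds a reference and the handle was closed.
      boolean cleanup() {
        if (refCount.get() == 0) {
          close();
          return true;
        }
        return false; // still referenced; the cache must retry the eviction later
      }

      @Override
      public void close() {
        // release the underlying RocksDB handle here
      }
    }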

[hadoop] branch trunk updated: YARN-9595. FPGA plugin: NullPointerException in FpgaNodeResourceUpdateHandler.updateConfiguredResource(). Contributed by Peter Bacsko.

2019-06-03 Thread ztang
This is an automated email from the ASF dual-hosted git repository.

ztang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 606061a  YARN-9595. FPGA plugin: NullPointerException in FpgaNodeResourceUpdateHandler.updateConfiguredResource(). Contributed by Peter Bacsko.
606061a is described below

commit 606061aa147dc6d619d6240b7ea31d8f8f220e5d
Author: Zhankun Tang 
AuthorDate: Tue Jun 4 09:56:59 2019 +0800

YARN-9595. FPGA plugin: NullPointerException in FpgaNodeResourceUpdateHandler.updateConfiguredResource(). Contributed by Peter Bacsko.
---
 .../resourceplugin/fpga/FpgaDiscoverer.java|  5 ++--
 .../resourceplugin/fpga/TestFpgaDiscoverer.java| 33 ++
 2 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java
index 185effa..180a011 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java
@@ -124,6 +124,7 @@ public class FpgaDiscoverer {
 
 if (allowed == null || allowed.equalsIgnoreCase(
 YarnConfiguration.AUTOMATICALLY_DISCOVER_GPU_DEVICES)) {
+  currentFpgaInfo = ImmutableList.copyOf(list);
   return list;
 } else if (allowed.matches("(\\d,)*\\d")){
   Set minors = Sets.newHashSet(allowed.split(","));
@@ -134,6 +135,8 @@ public class FpgaDiscoverer {
 .filter(dev -> minors.contains(String.valueOf(dev.getMinor(
 .collect(Collectors.toList());
 
+  currentFpgaInfo = ImmutableList.copyOf(list);
+
   // if the count of user configured is still larger than actual
   if (list.size() != minors.size()) {
 LOG.warn("We continue although there're mistakes in user's 
configuration " +
@@ -145,8 +148,6 @@ public class FpgaDiscoverer {
   YarnConfiguration.NM_FPGA_ALLOWED_DEVICES + ":\"" + allowed + "\"");
 }
 
-currentFpgaInfo = ImmutableList.copyOf(list);
-
 return list;
   }
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/TestFpgaDiscoverer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/TestFpgaDiscoverer.java
index 92e9db2..6f570c6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/TestFpgaDiscoverer.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/TestFpgaDiscoverer.java
@@ -288,6 +288,39 @@ public class TestFpgaDiscoverer {
 }
   }
 
+  @Test
+  public void testCurrentFpgaInfoWhenAllDevicesAreAllowed()
+  throws YarnException {
+conf.set(YarnConfiguration.NM_FPGA_AVAILABLE_DEVICES,
+"acl0/243:0,acl1/244:1");
+
+fpgaDiscoverer.initialize(conf);
+List<FpgaDevice> devices = fpgaDiscoverer.discover();
+List<FpgaDevice> currentFpgaInfo = fpgaDiscoverer.getCurrentFpgaInfo();
+
+assertEquals("Devices", devices, currentFpgaInfo);
+  }
+
+  @Test
+  public void testCurrentFpgaInfoWhenAllowedDevicesDefined()
+  throws YarnException {
+conf.set(YarnConfiguration.NM_FPGA_AVAILABLE_DEVICES,
+"acl0/243:0,acl1/244:1");
+conf.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, "0");
+
+fpgaDiscoverer.initialize(conf);
+List<FpgaDevice> devices = fpgaDiscoverer.discover();
+List<FpgaDevice> currentFpgaInfo = fpgaDiscoverer.getCurrentFpgaInfo();
+
+assertEquals("Devices", devices, currentFpgaInfo);
+assertEquals("List of devices", 1, currentFpgaInfo.size());
+
+FpgaDevice device = currentFpgaInfo.get(0);
+assertEquals("Device id", "acl0", device.getAliasDevName());
+assertEquals("Minor number", 0, device.getMinor());
+assertEquals("Major", 243, device.getMajor());
+  }
+
   private IntelFpgaOpenclPlugin.InnerShellExecutor mockPuginShell() {
IntelFpgaOpenclPlugin.InnerShellExecutor shell = mock(IntelFpgaOpenclPlugin.InnerShellExecutor.class);
 when(sh
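The shape of the bug, reduced to a hypothetical standalone class (names are not YARN's): the cached field was assigned only after the filtering step, so the "all devices allowed" branch returned early and left it null for later readers.

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    class DeviceDiscoverer {
      private List<String> currentDevices; // read later by a resource update handler

      List<String> discover(boolean allowAll) {
        List<String> devices = Arrays.asList("acl0", "acl1");
        if (allowAll) {
          // Before the fix this branch returned without setting the cache,
          // leaving currentDevices null and causing the later NPE.
          currentDevices = Collections.unmodifiableList(devices);
          return devices;
        }
        devices = Collections.singletonList("acl0"); // filtered subset
        currentDevices = Collections.unmodifiableList(devices);
        return devices;
      }

      List<String> getCurrentDevices() {
        return currentDevices;
      }
    }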

[hadoop] branch HDFS-13891 updated: HDFS-14508. RBF: Clean-up and refactor UI components. Contributed by Takanobu Asanuma.

2019-06-03 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new d60e686  HDFS-14508. RBF: Clean-up and refactor UI components. Contributed by Takanobu Asanuma.
d60e686 is described below

commit d60e686859a7e7328768a67a92c5237ec486fdfa
Author: Ayush Saxena 
AuthorDate: Tue Jun 4 08:40:31 2019 +0530

HDFS-14508. RBF: Clean-up and refactor UI components. Contributed by Takanobu Asanuma.
---
 .../server/federation/metrics/FederationMBean.java |  29 +-
 .../federation/metrics/NamenodeBeanMetrics.java|  73 ++-
 .../{FederationMetrics.java => RBFMetrics.java}|  56 +--
 .../server/federation/metrics/RouterMBean.java | 104 +
 .../hdfs/server/federation/router/Router.java  |   6 +-
 .../federation/router/RouterMetricsService.java|  14 +--
 .../src/main/webapps/router/federationhealth.html  |   8 +-
 .../src/main/webapps/router/federationhealth.js|   3 +-
 .../TestRouterHDFSContractDelegationToken.java |   8 +-
 ...tFederationMetrics.java => TestRBFMetrics.java} |  38 
 .../federation/router/TestDisableNameservices.java |   4 +-
 .../federation/router/TestRouterAdminCLI.java  |   6 +-
 12 files changed, 251 insertions(+), 98 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
index 53b2703..5fa4755 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
@@ -193,66 +193,87 @@ public interface FederationMBean {
   /**
* When the router started.
* @return Date as a string the router started.
+   * @deprecated Use {@link RouterMBean#getRouterStarted()} instead.
*/
+  @Deprecated
   String getRouterStarted();
 
   /**
* Get the version of the router.
* @return Version of the router.
+   * @deprecated Use {@link RouterMBean#getVersion()} instead.
*/
+  @Deprecated
   String getVersion();
 
   /**
* Get the compilation date of the router.
* @return Compilation date of the router.
+   * @deprecated Use {@link RouterMBean#getCompiledDate()} instead.
*/
+  @Deprecated
   String getCompiledDate();
 
   /**
* Get the compilation info of the router.
* @return Compilation info of the router.
+   * @deprecated Use {@link RouterMBean#getCompileInfo()} instead.
*/
+  @Deprecated
   String getCompileInfo();
 
   /**
* Get the host and port of the router.
* @return Host and port of the router.
+   * @deprecated Use {@link RouterMBean#getHostAndPort()} instead.
*/
+  @Deprecated
   String getHostAndPort();
 
   /**
* Get the identifier of the router.
* @return Identifier of the router.
+   * @deprecated Use {@link RouterMBean#getRouterId()} instead.
*/
+  @Deprecated
   String getRouterId();
 
   /**
-   * Get the host and port of the router.
-   * @return Host and port of the router.
+   * Gets the cluster ids of the namenodes.
+   * @return the cluster ids of the namenodes.
+   * @deprecated Use {@link RouterMBean#getClusterId()} instead.
*/
   String getClusterId();
 
   /**
-   * Get the host and port of the router.
-   * @return Host and port of the router.
+   * Gets the block pool ids of the namenodes.
+   * @return the block pool ids of the namenodes.
+   * @deprecated Use {@link RouterMBean#getBlockPoolId()} instead.
*/
+  @Deprecated
   String getBlockPoolId();
 
   /**
* Get the current state of the router.
* @return String label for the current router state.
+   * @deprecated Use {@link RouterMBean#getRouterStatus()} instead.
*/
+  @Deprecated
   String getRouterStatus();
 
   /**
* Get the current number of delegation tokens in memory.
* @return number of DTs
+   * @deprecated Use {@link RouterMBean#getCurrentTokensCount()} instead.
*/
+  @Deprecated
   long getCurrentTokensCount();
 
   /**
* Get the security status of the router.
* @return Security status.
+   * @deprecated Use {@link RouterMBean#isSecurityEnabled()} instead.
*/
+  @Deprecated
   boolean isSecurityEnabled();
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
index 50ec175..6d26aa0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
+++ b/ha
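The FederationMBean hunk above follows the standard Java deprecation pattern: the old accessor stays for compatibility, carries both the @Deprecated annotation and a javadoc @deprecated pointer, and new code targets the replacement interface. A trimmed illustrative example (RouterMBean is the real replacement interface from the patch; this one-method interface is only a sketch):

    public interface FederationMBeanSketch {
      /**
       * When the router started.
       * @return Date as a string the router started.
       * @deprecated Use {@link RouterMBean#getRouterStarted()} instead.
       */
      @Deprecated
      String getRouterStarted();
    }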

[hadoop] branch docker-hadoop-runner-jdk11 updated: HDDS-1632. Make the hadoop home word readable and avoid sudo in hadoop-runner.

2019-06-03 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch docker-hadoop-runner-jdk11
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/docker-hadoop-runner-jdk11 by this push:
 new 22597d3  HDDS-1632. Make the hadoop home word readable and avoid sudo in hadoop-runner.
22597d3 is described below

commit 22597d33acf67d1c0f963e2c79f7534e9dc5bf0d
Author: Elek, Márton 
AuthorDate: Tue Jun 4 08:15:52 2019 +0200

HDDS-1632. Make the hadoop home word readable and avoid sudo in hadoop-runner.
---
 Dockerfile | 4 ++--
 scripts/starter.sh | 5 +
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/Dockerfile b/Dockerfile
index 20f5d31..fd7c40c 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -31,6 +31,7 @@ ENV PATH $PATH:/opt/hadoop/bin
 
 RUN groupadd --gid 1000 hadoop
 RUN useradd --uid 1000 hadoop --gid 100 --home /opt/hadoop
+RUN chmod 755 /opt/hadoop
 RUN echo "hadoop ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
 RUN chown hadoop /opt
 ADD scripts /opt/
@@ -40,7 +41,6 @@ RUN mkdir -p /etc/hadoop && mkdir -p /var/log/hadoop && chmod 1777 /etc/hadoop &
 ENV HADOOP_LOG_DIR=/var/log/hadoop
 ENV HADOOP_CONF_DIR=/etc/hadoop
 WORKDIR /opt/hadoop
-
-VOLUME /data
+RUN mkdir /data && chmod 1777 /data
 USER hadoop
 ENTRYPOINT ["/usr/local/bin/dumb-init", "--", "/opt/starter.sh"]
diff --git a/scripts/starter.sh b/scripts/starter.sh
index 1328607..6b5bbe2 100755
--- a/scripts/starter.sh
+++ b/scripts/starter.sh
@@ -96,9 +96,6 @@ if [ -n "$KERBEROS_ENABLED" ]; then
 sed "s/SERVER/$KERBEROS_SERVER/g" "$DIR"/krb5.conf | sudo tee /etc/krb5.conf
 fi
 
-#To avoid docker volume permission problems
-sudo chmod o+rwx /data
-
 "$DIR"/envtoconf.py --destination "${HADOOP_CONF_DIR:-/opt/hadoop/etc/hadoop}"
 
 if [ -n "$ENSURE_NAMENODE_DIR" ]; then
@@ -139,7 +136,7 @@ if [ -n "$BYTEMAN_SCRIPT" ] || [ -n "$BYTEMAN_SCRIPT_URL" ]; then
   export PATH=$PATH:$BYTEMAN_DIR/bin
 
   if [ ! -z "$BYTEMAN_SCRIPT_URL" ]; then
-sudo wget $BYTEMAN_SCRIPT_URL -O /tmp/byteman.btm
+wget $BYTEMAN_SCRIPT_URL -O /tmp/byteman.btm
 export BYTEMAN_SCRIPT=/tmp/byteman.btm
   fi
 





[hadoop] branch trunk updated: HDDS-1607. Create smoketest for non-secure mapreduce example (#869)

2019-06-03 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1fc359f  HDDS-1607. Create smoketest for non-secure mapreduce example (#869)
1fc359f is described below

commit 1fc359fc101b3ff90c95d22a3f4cfa78b65ae47d
Author: Elek, Márton 
AuthorDate: Tue Jun 4 08:18:02 2019 +0200

HDDS-1607. Create smoketest for non-secure mapreduce example (#869)

* HDDS-1607. Create smoketest for non-secure mapreduce example.

* remove hardcoded project version
---
 hadoop-ozone/dist/src/main/compose/ozone-mr/.env   |  19 +++
 .../src/main/compose/ozone-mr/docker-compose.yaml  |  95 +++
 .../dist/src/main/compose/ozone-mr/docker-config   | 130 +
 .../dist/src/main/compose/ozone-mr/test.sh |  36 ++
 .../dist/src/main/smoketest/createmrenv.robot  |  48 
 .../dist/src/main/smoketest/mapreduce.robot|  37 ++
 6 files changed, 365 insertions(+)

diff --git a/hadoop-ozone/dist/src/main/compose/ozone-mr/.env b/hadoop-ozone/dist/src/main/compose/ozone-mr/.env
new file mode 100644
index 000..ba24fed
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/ozone-mr/.env
@@ -0,0 +1,19 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+HDDS_VERSION=${hdds.version}
+HADOOP_IMAGE=apache/hadoop
+HADOOP_VERSION=3
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-mr/docker-compose.yaml b/hadoop-ozone/dist/src/main/compose/ozone-mr/docker-compose.yaml
new file mode 100644
index 000..1a7f872
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/ozone-mr/docker-compose.yaml
@@ -0,0 +1,95 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+version: "3"
+services:
+  datanode:
+image: apache/hadoop-runner
+volumes:
+  - ../..:/opt/hadoop
+ports:
+  - 9864
+command: ["/opt/hadoop/bin/ozone","datanode"]
+env_file:
+  - docker-config
+  om:
+image: apache/hadoop-runner
+hostname: om
+volumes:
+  - ../..:/opt/hadoop
+ports:
+  - 9874:9874
+environment:
+  WAITFOR: scm:9876
+  ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION
+env_file:
+  - docker-config
+command: ["/opt/hadoop/bin/ozone","om"]
+  s3g:
+image: apache/hadoop-runner
+hostname: s3g
+volumes:
+  - ../..:/opt/hadoop
+ports:
+  - 9878:9878
+env_file:
+  - ./docker-config
+command: ["/opt/hadoop/bin/ozone","s3g"]
+  scm:
+image: apache/hadoop-runner:latest
+hostname: scm
+volumes:
+  - ../..:/opt/hadoop
+ports:
+  - 9876:9876
+env_file:
+  - docker-config
+environment:
+  ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
+command: ["/opt/hadoop/bin/ozone","scm"]
+  rm:
+image: ${HADOOP_IMAGE}:${HADOOP_VERSION}
+hostname: rm
+volumes:
+  - ../..:/opt/ozone
+ports:
+  - 8088:8088
+env_file:
+  - ./docker-config
+environment:
+  HADOOP_CLASSPATH: /opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib-current-@project.version@.jar
+command: ["yarn", "resourcemanager"]
+  nm:
+image: ${HADOOP_IMAGE}:${HADOOP_VERSION}
+hostname: nm
+volumes:
+  - ../..:/opt/ozone
+env_file:
+  - ./docker-config
+environment:
+  HADOOP_CLASSPATH: /opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib

[hadoop] branch trunk updated: HDDS-1629. Tar file creation can be optional for non-dist builds. Contributed by Elek, Marton. (#887)

2019-06-03 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e140a45  HDDS-1629. Tar file creation can be optional for non-dist builds. Contributed by Elek, Marton. (#887)
e140a45 is described below

commit e140a450465c903217c73942f1d9200ea7f27570
Author: Elek, Márton 
AuthorDate: Tue Jun 4 08:20:45 2019 +0200

HDDS-1629. Tar file creation can be optional for non-dist builds. Contributed by Elek, Marton. (#887)
---
 hadoop-ozone/dist/pom.xml | 49 ++-
 1 file changed, 31 insertions(+), 18 deletions(-)

diff --git a/hadoop-ozone/dist/pom.xml b/hadoop-ozone/dist/pom.xml
index 046f89c..855fab8 100644
--- a/hadoop-ozone/dist/pom.xml
+++ b/hadoop-ozone/dist/pom.xml
@@ -225,24 +225,6 @@
           </execution>
-          <execution>
-            <id>tar-ozone</id>
-            <phase>package</phase>
-            <goals>
-              <goal>exec</goal>
-            </goals>
-            <configuration>
-              <executable>${shell-executable}</executable>
-              <workingDirectory>${project.build.directory}</workingDirectory>
-              <arguments>
-                <argument>${basedir}/dev-support/bin/dist-tar-stitching</argument>
-                <argument>${hdds.version}</argument>
-                <argument>${project.build.directory}</argument>
-              </arguments>
-            </configuration>
-          </execution>
         </executions>
       </plugin>

[hadoop] branch docker-hadoop-runner-jdk11 updated: HDDS-1633. Update rat from 0.12 to 0.13 in hadoop-runner build script (#891)

2019-06-03 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch docker-hadoop-runner-jdk11
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/docker-hadoop-runner-jdk11 by 
this push:
 new cef9966  HDDS-1633. Update rat from 0.12 to 0.13 in hadoop-runner build script (#891)
cef9966 is described below

commit cef996659f43d165ca8ba451074a4eac21dcf525
Author: Elek, Márton 
AuthorDate: Tue Jun 4 08:22:12 2019 +0200

HDDS-1633. Update rat from 0.12 to 0.13 in hadoop-runner build script (#891)
---
 build.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/build.sh b/build.sh
index 1181708..f7889df 100755
--- a/build.sh
+++ b/build.sh
@@ -18,10 +18,10 @@ DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
 set -e
 mkdir -p build
 if [ ! -d "$DIR/build/apache-rat-0.12" ]; then
-   wget http://xenia.sote.hu/ftp/mirrors/www.apache.org/creadur/apache-rat-0.12/apache-rat-0.12-bin.tar.gz -O $DIR/build/apache-rat.tar.gz
+   wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=creadur/apache-rat-0.13/apache-rat-0.13-bin.tar.gz' -O $DIR/build/apache-rat.tar.gz
cd $DIR/build
tar zvxf apache-rat.tar.gz
cd -
 fi
-java -jar $DIR/build/apache-rat-0.12/apache-rat-0.12.jar $DIR -e public -e apache-rat-0.12 -e .git -e .gitignore
+java -jar $DIR/build/apache-rat-0.13/apache-rat-0.13.jar $DIR -e public -e apache-rat-0.12 -e .git -e .gitignore
 docker build -t apache/hadoop-runner .





[hadoop] branch trunk updated: HDDS-1631. Fix auditparser smoketests (#892)

2019-06-03 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 5d5081e  HDDS-1631. Fix auditparser smoketests (#892)

commit 5d5081eff8e898b5f16481dd87891c11763a0ec8
Author: Elek, Márton 
AuthorDate: Tue Jun 4 08:30:43 2019 +0200

HDDS-1631. Fix auditparser smoketests (#892)
---
 hadoop-ozone/dist/src/main/compose/ozone/test.sh   | 2 +-
 hadoop-ozone/dist/src/main/smoketest/auditparser/auditparser.robot | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/compose/ozone/test.sh b/hadoop-ozone/dist/src/main/compose/ozone/test.sh
index 1c90175..f36fb48 100755
--- a/hadoop-ozone/dist/src/main/compose/ozone/test.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozone/test.sh
@@ -26,7 +26,7 @@ start_docker_env
 #Due to the limitation of the current auditparser test, it should be the
 #first test in a clean cluster.
 
-execute_robot_test scm auditparser
+execute_robot_test om auditparser
 
 execute_robot_test scm basic/basic.robot
 
diff --git a/hadoop-ozone/dist/src/main/smoketest/auditparser/auditparser.robot b/hadoop-ozone/dist/src/main/smoketest/auditparser/auditparser.robot
index a4b0b7a..30790ec 100644
--- a/hadoop-ozone/dist/src/main/smoketest/auditparser/auditparser.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/auditparser/auditparser.robot
@@ -36,8 +36,9 @@ Initiating freon to generate data
Should Not Contain   ${result}  ERROR
 
 Testing audit parser
-${logfile} =   Execute  ls -t /opt/hadoop/logs | grep om-audit | head -1
-   Execute  ozone auditparser /opt/hadoop/audit.db load "/opt/hadoop/logs/${logfile}"
+${logdir} =    Get Environment Variable  HADOOP_LOG_DIR  /var/log/hadoop
+${logfile} =   Execute  ls -t "${logdir}" | grep om-audit | head -1
+   Execute  ozone auditparser /opt/hadoop/audit.db load "${logdir}/${logfile}"
 ${result} =    Execute  ozone auditparser /opt/hadoop/audit.db template top5cmds
    Should Contain   ${result}  ALLOCATE_KEY
 ${result} =    Execute  ozone auditparser /opt/hadoop/audit.db template top5users

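The updated test stops hard-coding /opt/hadoop/logs and resolves the audit
log directory from the HADOOP_LOG_DIR environment variable, falling back to
/var/log/hadoop. The lookup is equivalent to this shell sketch, assuming the
same container layout with a single om-audit log file:

    # Resolve the newest OM audit log the way the updated robot test does.
    logdir="${HADOOP_LOG_DIR:-/var/log/hadoop}"            # env var with fallback
    logfile=$(ls -t "$logdir" | grep om-audit | head -1)   # newest om-audit file
    ozone auditparser /opt/hadoop/audit.db load "$logdir/$logfile"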

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: SUBMARINE-82. Fix english grammar mistakes in documentation. Contributed by Szilard Nemeth.

2019-06-03 Thread ztang
This is an automated email from the ASF dual-hosted git repository.

ztang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7991159  SUBMARINE-82. Fix english grammar mistakes in documentation. 
Contributed by Szilard Nemeth.
7991159 is described below

commit 799115967d6e1a4074d0186b06b4eb97251a19df
Author: Zhankun Tang 
AuthorDate: Tue Jun 4 14:44:37 2019 +0800

SUBMARINE-82. Fix english grammar mistakes in documentation. Contributed by 
Szilard Nemeth.
---
 .../src/site/markdown/Examples.md  |  2 +-
 .../src/site/markdown/HowToInstall.md  | 24 +++
 .../src/site/markdown/Index.md | 14 ++--
 .../src/site/markdown/InstallationGuide.md | 79 +-
 .../src/site/markdown/QuickStart.md| 29 
 .../markdown/RunningDistributedCifar10TFJobs.md| 16 ++---
 .../src/site/markdown/TestAndTroubleshooting.md|  8 +--
 7 files changed, 95 insertions(+), 77 deletions(-)

diff --git 
a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Examples.md 
b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Examples.md
index b66b32d..fd61e83 100644
--- a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Examples.md
+++ b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Examples.md
@@ -14,7 +14,7 @@
 
 # Examples
 
-Here're some examples about Submarine usage.
+Here are some examples about how to use Submarine:
 
 [Running Distributed CIFAR 10 Tensorflow 
Job](RunningDistributedCifar10TFJobs.html)
 
diff --git 
a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/HowToInstall.md 
b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/HowToInstall.md
index 65e56ea..af96d6d 100644
--- a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/HowToInstall.md
+++ b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/HowToInstall.md
@@ -14,23 +14,23 @@
 
 # How to Install Dependencies
 
-Submarine project uses YARN Service, Docker container, and GPU (when GPU 
hardware available and properly configured).
+Submarine project uses YARN Service, Docker container and GPU.
+GPU could only be used if a GPU hardware is available and properly configured.
 
-That means as an admin, you have to properly setup YARN Service related 
dependencies, including:
+As an administrator, you have to properly setup YARN Service related 
dependencies, including:
 - YARN Registry DNS
+- Docker related dependencies, including:
+  - Docker binary with expected versions
+  - Docker network that allows Docker containers to talk to each other across 
different nodes
 
-Docker related dependencies, including:
-- Docker binary with expected versions.
-- Docker network which allows Docker container can talk to each other across 
different nodes.
+If you would like to use GPU, you need to set up:
+- GPU Driver
+- Nvidia-docker
 
-And when GPU wanna to be used:
-- GPU Driver.
-- Nvidia-docker.
-
-For your convenience, we provided installation documents to help you to setup 
your environment. You can always choose to have them installed in your own way.
+For your convenience, we provided some installation documents to help you 
setup your environment. You can always choose to have them installed in your 
own way.
 
 Use Submarine installer to install dependencies: 
[EN](https://github.com/hadoopsubmarine/hadoop-submarine-ecosystem/tree/master/submarine-installer)
 
[CN](https://github.com/hadoopsubmarine/hadoop-submarine-ecosystem/blob/master/submarine-installer/README-CN.md)
 
-Alternatively, you can follow manual install dependencies: 
[EN](InstallationGuide.html) [CN](InstallationGuideChineseVersion.html)
+Alternatively, you can follow this guide to manually install dependencies: 
[EN](InstallationGuide.html) [CN](InstallationGuideChineseVersion.html)
 
-Once you have installed dependencies, please follow following guide to 
[TestAndTroubleshooting](TestAndTroubleshooting.html).  
\ No newline at end of file
+Once you have installed all the dependencies, please follow this guide: 
[TestAndTroubleshooting](TestAndTroubleshooting.html).
\ No newline at end of file
diff --git a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Index.md 
b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Index.md
index d11fa45..e2c7979 100644
--- a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Index.md
+++ b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Index.md
@@ -21,20 +21,20 @@ Goals of Submarine:
 
 - Can launch services to serve Tensorflow/MXNet models.
 
-- Support run distributed Tensorflow jobs with simple configs.
+- Supports running distributed Tensorflow jobs with simple configs.
 
-- Support run standalone PyTorch jobs with simple configs.
+- Supports running standalone PyTorch jobs with simple configs.
 
-- Support run user-specified Docker images.
+- Supports running user-specified Docker images.

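The rewritten HowToInstall.md above enumerates the runtime prerequisites:
YARN Registry DNS, a Docker binary of the expected version, a Docker network
that spans nodes, and, for GPU use, the GPU driver plus Nvidia-docker. A
quick pre-flight check along those lines might look like the sketch below;
the network name is an assumption for illustration, and the nvidia-docker
binary name matches nvidia-docker 1.x, neither of which the Submarine docs
mandate:

    # Hedged pre-flight sketch for the dependencies named in HowToInstall.md.
    command -v docker >/dev/null || echo "missing: docker binary"
    docker network ls --format '{{.Name}}' | grep -qx hadoop-net \
      || echo "missing: cross-node docker network (name 'hadoop-net' assumed)"
    # The GPU path is optional; these checks assume nvidia-docker 1.x naming.
    command -v nvidia-smi >/dev/null || echo "GPU driver not detected"
    command -v nvidia-docker >/dev/null || echo "nvidia-docker not installed"
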
[hadoop] branch submarine-0.2 updated: SUBMARINE-82. Fix english grammar mistakes in documentation. Contributed by Szilard Nemeth.

2019-06-03 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch submarine-0.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/submarine-0.2 by this push:
 new c177cc9  SUBMARINE-82. Fix english grammar mistakes in documentation. 
Contributed by Szilard Nemeth.
c177cc9 is described below

commit c177cc97508743f7e112876a97280554c01813a4
Author: Zhankun Tang 
AuthorDate: Tue Jun 4 14:44:37 2019 +0800

SUBMARINE-82. Fix english grammar mistakes in documentation. Contributed by 
Szilard Nemeth.

(cherry picked from commit 799115967d6e1a4074d0186b06b4eb97251a19df)
---
 .../src/site/markdown/Examples.md  |  2 +-
 .../src/site/markdown/HowToInstall.md  | 24 +++
 .../src/site/markdown/Index.md | 14 ++--
 .../src/site/markdown/InstallationGuide.md | 79 +-
 .../src/site/markdown/QuickStart.md| 29 
 .../markdown/RunningDistributedCifar10TFJobs.md| 16 ++---
 .../src/site/markdown/TestAndTroubleshooting.md|  8 +--
 7 files changed, 95 insertions(+), 77 deletions(-)

diff --git 
a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Examples.md 
b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Examples.md
index b66b32d..fd61e83 100644
--- a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Examples.md
+++ b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Examples.md
@@ -14,7 +14,7 @@
 
 # Examples
 
-Here're some examples about Submarine usage.
+Here are some examples about how to use Submarine:
 
 [Running Distributed CIFAR 10 Tensorflow 
Job](RunningDistributedCifar10TFJobs.html)
 
diff --git 
a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/HowToInstall.md 
b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/HowToInstall.md
index 65e56ea..af96d6d 100644
--- a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/HowToInstall.md
+++ b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/HowToInstall.md
@@ -14,23 +14,23 @@
 
 # How to Install Dependencies
 
-Submarine project uses YARN Service, Docker container, and GPU (when GPU 
hardware available and properly configured).
+Submarine project uses YARN Service, Docker container and GPU.
+GPU could only be used if a GPU hardware is available and properly configured.
 
-That means as an admin, you have to properly setup YARN Service related 
dependencies, including:
+As an administrator, you have to properly setup YARN Service related 
dependencies, including:
 - YARN Registry DNS
+- Docker related dependencies, including:
+  - Docker binary with expected versions
+  - Docker network that allows Docker containers to talk to each other across 
different nodes
 
-Docker related dependencies, including:
-- Docker binary with expected versions.
-- Docker network which allows Docker container can talk to each other across 
different nodes.
+If you would like to use GPU, you need to set up:
+- GPU Driver
+- Nvidia-docker
 
-And when GPU wanna to be used:
-- GPU Driver.
-- Nvidia-docker.
-
-For your convenience, we provided installation documents to help you to setup 
your environment. You can always choose to have them installed in your own way.
+For your convenience, we provided some installation documents to help you 
setup your environment. You can always choose to have them installed in your 
own way.
 
 Use Submarine installer to install dependencies: 
[EN](https://github.com/hadoopsubmarine/hadoop-submarine-ecosystem/tree/master/submarine-installer)
 
[CN](https://github.com/hadoopsubmarine/hadoop-submarine-ecosystem/blob/master/submarine-installer/README-CN.md)
 
-Alternatively, you can follow manual install dependencies: 
[EN](InstallationGuide.html) [CN](InstallationGuideChineseVersion.html)
+Alternatively, you can follow this guide to manually install dependencies: 
[EN](InstallationGuide.html) [CN](InstallationGuideChineseVersion.html)
 
-Once you have installed dependencies, please follow following guide to 
[TestAndTroubleshooting](TestAndTroubleshooting.html).  
\ No newline at end of file
+Once you have installed all the dependencies, please follow this guide: 
[TestAndTroubleshooting](TestAndTroubleshooting.html).
\ No newline at end of file
diff --git a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Index.md 
b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Index.md
index d11fa45..e2c7979 100644
--- a/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Index.md
+++ b/hadoop-submarine/hadoop-submarine-core/src/site/markdown/Index.md
@@ -21,20 +21,20 @@ Goals of Submarine:
 
 - Can launch services to serve Tensorflow/MXNet models.
 
-- Support run distributed Tensorflow jobs with simple configs.
+- Supports running distributed Tensorflow jobs with simple configs.
 
-- Support run standalone PyTorch jobs with simple configs.
+- Supports running standalone PyTorch jobs with simple configs.

[hadoop] 01/01: Preparing for submarine-0.2.0 release

2019-06-03 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch submarine-0.2.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 214c2104ed9cc05fbccecb809855e5054675725f
Author: Sunil G 
AuthorDate: Tue Jun 4 12:20:34 2019 +0530

Preparing for submarine-0.2.0 release
---
 hadoop-submarine/hadoop-submarine-all/pom.xml | 4 ++--
 hadoop-submarine/hadoop-submarine-core/pom.xml| 4 ++--
 hadoop-submarine/hadoop-submarine-dist/pom.xml| 4 ++--
 hadoop-submarine/hadoop-submarine-tony-runtime/pom.xml| 6 +++---
 hadoop-submarine/hadoop-submarine-yarnservice-runtime/pom.xml | 8 
 hadoop-submarine/pom.xml  | 2 +-
 6 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/hadoop-submarine/hadoop-submarine-all/pom.xml 
b/hadoop-submarine/hadoop-submarine-all/pom.xml
index ade3dfd..d2ebf36 100644
--- a/hadoop-submarine/hadoop-submarine-all/pom.xml
+++ b/hadoop-submarine/hadoop-submarine-all/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <artifactId>hadoop-submarine</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>0.2.1-SNAPSHOT</version>
+    <version>0.2.0</version>
   </parent>
   <artifactId>${project.artifactId}</artifactId>
   <version>${project.version}</version>
@@ -30,7 +30,7 @@
 
     <hadoop.common.build.dir>${project.parent.parent.basedir}</hadoop.common.build.dir>
     <project.artifactId>hadoop-submarine-all</project.artifactId>
-    <project.version>0.2.1-SNAPSHOT</project.version>
+    <project.version>0.2.0</project.version>
   </properties>
 
   <dependencies>
diff --git a/hadoop-submarine/hadoop-submarine-core/pom.xml 
b/hadoop-submarine/hadoop-submarine-core/pom.xml
index 1383577..e636ec5 100644
--- a/hadoop-submarine/hadoop-submarine-core/pom.xml
+++ b/hadoop-submarine/hadoop-submarine-core/pom.xml
@@ -20,10 +20,10 @@
   <parent>
     <artifactId>hadoop-submarine</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>0.2.1-SNAPSHOT</version>
+    <version>0.2.0</version>
   </parent>
   <artifactId>hadoop-submarine-core</artifactId>
-  <version>0.2.1-SNAPSHOT</version>
+  <version>0.2.0</version>
   <name>Hadoop Submarine Core</name>
 
   <properties>
diff --git a/hadoop-submarine/hadoop-submarine-dist/pom.xml 
b/hadoop-submarine/hadoop-submarine-dist/pom.xml
index e5684f6..76a54a6 100644
--- a/hadoop-submarine/hadoop-submarine-dist/pom.xml
+++ b/hadoop-submarine/hadoop-submarine-dist/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <artifactId>hadoop-submarine</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>0.2.1-SNAPSHOT</version>
+    <version>0.2.0</version>
   </parent>
   <artifactId>${project.artifactId}</artifactId>
   <version>${project.version}</version>
@@ -31,7 +31,7 @@
 
     <hadoop.common.build.dir>${project.parent.parent.basedir}</hadoop.common.build.dir>
     <project.artifactId>hadoop-submarine-dist</project.artifactId>
-    <project.version>0.2.1-SNAPSHOT</project.version>
+    <project.version>0.2.0</project.version>
   </properties>
 
   <dependencies>
diff --git a/hadoop-submarine/hadoop-submarine-tony-runtime/pom.xml 
b/hadoop-submarine/hadoop-submarine-tony-runtime/pom.xml
index 3e7c4d0..00d1e03 100644
--- a/hadoop-submarine/hadoop-submarine-tony-runtime/pom.xml
+++ b/hadoop-submarine/hadoop-submarine-tony-runtime/pom.xml
@@ -18,7 +18,7 @@
   <parent>
     <artifactId>hadoop-submarine</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>0.2.1-SNAPSHOT</version>
+    <version>0.2.0</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -28,7 +28,7 @@
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-submarine-core</artifactId>
-      <version>0.2.1-SNAPSHOT</version>
+      <version>0.2.0</version>
       <scope>compile</scope>
     </dependency>
 
@@ -59,7 +59,7 @@
       <artifactId>hadoop-submarine-core</artifactId>
       <type>test-jar</type>
       <scope>test</scope>
-      <version>0.2.1-SNAPSHOT</version>
+      <version>0.2.0</version>
     </dependency>
     <dependency>
       <groupId>org.mockito</groupId>
diff --git a/hadoop-submarine/hadoop-submarine-yarnservice-runtime/pom.xml 
b/hadoop-submarine/hadoop-submarine-yarnservice-runtime/pom.xml
index fb2703c..6253b16 100644
--- a/hadoop-submarine/hadoop-submarine-yarnservice-runtime/pom.xml
+++ b/hadoop-submarine/hadoop-submarine-yarnservice-runtime/pom.xml
@@ -20,10 +20,10 @@
   <parent>
     <artifactId>hadoop-submarine</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>0.2.1-SNAPSHOT</version>
+    <version>0.2.0</version>
   </parent>
   <artifactId>hadoop-submarine-yarnservice-runtime</artifactId>
-  <version>0.2.1-SNAPSHOT</version>
+  <version>0.2.0</version>
   <name>Hadoop Submarine YARN Service Runtime</name>
 
   <properties>
@@ -98,12 +98,12 @@
       <artifactId>hadoop-submarine-core</artifactId>
       <type>test-jar</type>
       <scope>test</scope>
-      <version>0.2.1-SNAPSHOT</version>
+      <version>0.2.0</version>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-submarine-core</artifactId>
-      <version>0.2.1-SNAPSHOT</version>
+      <version>0.2.0</version>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
diff --git a/hadoop-submarine/pom.xml b/hadoop-submarine/pom.xml
index f253e21..f997f13 100644
--- a/hadoop-submarine/pom.xml
+++ b/hadoop-submarine/pom.xml
@@ -24,7 +24,7 @@
 
   </parent>
   <artifactId>hadoop-submarine</artifactId>
-  <version>0.2.1-SNAPSHOT</version>
+  <version>0.2.0</version>
   <name>Hadoop Submarine</name>
   <packaging>pom</packaging>
 


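The diff above flips every hadoop-submarine module from 0.2.1-SNAPSHOT to
0.2.0 across six poms. The Maven versions plugin can produce the same edit in
one pass; the following is only an equivalent sketch, not how this commit was
generated:

    # Hedged sketch: set all hadoop-submarine modules to 0.2.0 in one step.
    cd hadoop-submarine
    mvn versions:set -DnewVersion=0.2.0 -DgenerateBackupPoms=false
    # versions:set walks the reactor, so parent references and in-reactor
    # dependencies on hadoop-submarine-core should be rewritten together.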
-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch submarine-0.2.0 created (now 214c210)

2019-06-03 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a change to branch submarine-0.2.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 214c210  Preparing for submarine-0.2.0 release

This branch includes the following new commits:

 new 214c210  Preparing for submarine-0.2.0 release

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org