[ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=303334&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303334
 ]

ASF GitHub Bot logged work on HDDS-1577:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Aug/19 03:39
            Start Date: 29/Aug/19 03:39
    Worklog Time Spent: 10m 
      Work Description: xiaoyuyao commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r318874255
 
 

 ##########
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##########
 @@ -0,0 +1,237 @@
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that chooses datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * <p>
+ * 1. Get a list of healthy nodes.
+ * 2. Filter out nodes that either don't have enough space left
+ *    or are too heavily engaged in other pipelines.
+ * 3. Choose an anchor node among the viable nodes, following the algorithm
+ *    described in @SCMContainerPlacementCapacity.
+ * 4. Choose other nodes around the anchor node based on network topology.
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
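
A minimal, self-contained sketch of the four-step selection flow the Javadoc
above describes. The Node type, method names, and the anchor/topology
heuristics here are stand-ins for illustration only, not the code under
review (which uses DatanodeDetails, NodeManager, and NetworkTopology):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch; types and heuristics are simplified stand-ins.
final class PlacementSketch {

  record Node(String name, long freeSpaceBytes, int pipelineCount, String rack) {}

  static List<Node> choose(List<Node> healthyNodes, int nodesRequired,
                           long minFreeSpace, int maxPipelines) {
    // Step 2: drop nodes without enough space or too heavily engaged.
    List<Node> viable = healthyNodes.stream()
        .filter(n -> n.freeSpaceBytes() >= minFreeSpace)
        .filter(n -> n.pipelineCount() < maxPipelines)
        .toList();
    if (viable.size() < nodesRequired) {
      throw new IllegalStateException("Not enough viable nodes");
    }

    // Step 3: pick an anchor; here simply the node with the most free space
    // (the real policy weights the choice as in SCMContainerPlacementCapacity).
    Node anchor = viable.stream()
        .max(Comparator.comparingLong(Node::freeSpaceBytes))
        .orElseThrow();

    // Step 4: pick the remaining nodes "around" the anchor; rack comparison is
    // only a rough stand-in for a real NetworkTopology-aware choice.
    List<Node> result = new ArrayList<>(List.of(anchor));
    viable.stream()
        .filter(n -> !n.equals(anchor))
        .sorted(Comparator.comparing((Node n) -> n.rack().equals(anchor.rack())))
        .limit(nodesRequired - 1)
        .forEach(result::add);
    return result;
  }
}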
 
 Review comment:
   This is not an issue specific to this patch, but I think the class hierarchy 
needs some adjustment. Currently:
   PipelinePlacementPolicy <- SCMCommonPolicy <- ContainerPlacementPolicy
   
   Should we change this so that SCMCommonPolicy is the base for both 
PipelinePlacementPolicy and ContainerPlacementPolicy? If there are common 
pieces between PipelinePlacement and ContainerPlacement, we can move them 
to SCMCommonPolicy.
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 303334)
    Time Spent: 20m  (was: 10m)

> Add default pipeline placement policy implementation
> ----------------------------------------------------
>
>                 Key: HDDS-1577
>                 URL: https://issues.apache.org/jira/browse/HDDS-1577
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: SCM
>            Reporter: Siddharth Wagle
>            Assignee: Li Cheng
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.
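
A minimal sketch of that random selection, under the assumption that
PipelinePlacementCriteria boils down to a per-node predicate; the class,
method, and parameter names below are hypothetical, not the patch itself:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch: pick nodes at random from the healthy set, keeping
// only those that satisfy the placement criteria.
final class RandomPlacementSketch {

  static <N> List<N> pickRandomly(List<N> healthyNodes,
                                  Predicate<N> placementCriteria,
                                  int nodesRequired) {
    List<N> shuffled = new ArrayList<>(healthyNodes);
    Collections.shuffle(shuffled);           // randomize candidate order
    List<N> chosen = new ArrayList<>();
    for (N node : shuffled) {
      if (placementCriteria.test(node)) {    // e.g. enough space, not overloaded
        chosen.add(node);
        if (chosen.size() == nodesRequired) {
          return chosen;
        }
      }
    }
    throw new IllegalStateException("Not enough nodes satisfy the criteria");
  }
}

Shuffling first and then filtering keeps the selection unbiased among all
criteria-satisfying nodes, which matches the "randomly, as long as they
satisfy" wording of the description.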



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
