[ 
https://issues.apache.org/jira/browse/HDDS-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921587#comment-16921587
 ] 

Siddharth Wagle edited comment on HDDS-1569 at 9/3/19 5:27 PM:
---------------------------------------------------------------

[~timmylicheng] By internal data structures, I meant the 
_PipelineStateMap_, but I will take a look at the doc/code and get back if 
there are any other data structures that need an update.

The only way new pipelines are created should be through the background 
pipeline creator job. We should not create any pipelines on client requests; 
in fact, we should assume pipelines are already available to SCM and that no 
ad-hoc pipelines will be created. If a single thread creates pipelines, there 
is no need for blocking queues or synchronization, except that the utilization 
counters used for selecting a pipeline need to be atomic.
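To illustrate the single-creator-thread point above, here is a minimal sketch: only one background thread registers pipelines, so creation itself needs no locking, while client-facing selection bumps per-pipeline utilization counters via AtomicLong. Class and method names (PipelineSelector, selectPipeline) are illustrative, not from the Ozone codebase.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: one creator thread, lock-free utilization counters.
class PipelineSelector {
    // pipeline id -> utilization count; AtomicLong lets selector threads
    // update counters without any synchronized blocks.
    private final Map<String, AtomicLong> utilization = new ConcurrentHashMap<>();

    // Called only by the single background pipeline creator thread,
    // so pipeline creation needs no additional synchronization.
    void addPipeline(String pipelineId) {
        utilization.putIfAbsent(pipelineId, new AtomicLong());
    }

    // Pick the least-utilized candidate and record the allocation.
    // Assumes candidates is non-empty and all ids were registered.
    String selectPipeline(List<String> candidates) {
        String best = null;
        long bestCount = Long.MAX_VALUE;
        for (String id : candidates) {
            long count = utilization.get(id).get();
            if (count < bestCount) {
                bestCount = count;
                best = id;
            }
        }
        utilization.get(best).incrementAndGet();
        return best;
    }
}
```

With this shape, concurrent selectors only contend on the atomic increments, which matches the comment's claim that no blocking queues are required.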


> Add ability to SCM for creating multiple pipelines with same datanode
> ---------------------------------------------------------------------
>
>                 Key: HDDS-1569
>                 URL: https://issues.apache.org/jira/browse/HDDS-1569
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: SCM
>            Reporter: Siddharth Wagle
>            Assignee: Li Cheng
>            Priority: Major
>
> - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines 
> with datanodes that are not yet part of a sufficient number of pipelines
> - Define soft and hard upper bounds for pipeline membership
> - Create SCMAllocationManager that can be leveraged to get a candidate set of 
> datanodes based on placement policies
> - Add the datanodes to internal data structures
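The soft and hard upper bounds for pipeline membership mentioned in the description could be sketched as follows; the constant names and values are purely illustrative (the real limits would come from SCM configuration), but they show how placement could prefer datanodes below the soft bound while never exceeding the hard bound.

```java
// Hypothetical sketch of soft/hard pipeline-membership bounds.
class PipelineLimits {
    static final int SOFT_LIMIT = 2; // prefer datanodes below this count
    static final int HARD_LIMIT = 4; // never place a datanode beyond this

    // A datanode may join another pipeline only below the hard bound.
    static boolean canJoin(int pipelineCount) {
        return pipelineCount < HARD_LIMIT;
    }

    // Datanodes below the soft bound are preferred candidates, so the
    // placement policy fills under-utilized nodes first.
    static boolean isPreferred(int pipelineCount) {
        return pipelineCount < SOFT_LIMIT;
    }
}
```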



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
