[jira] [Updated] (HDDS-1575) Add support for storage policies to Pipelines

2020-05-31 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1575:

Target Version/s: 0.7.0

> Add support for storage policies to Pipelines
> ---------------------------------------------
>
> Key: HDDS-1575
> URL: https://issues.apache.org/jira/browse/HDDS-1575
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Siddharth Wagle
>Priority: Major
>
> The Pipeline-level storage policy can be thought of as segregating write 
> bandwidth for high-throughput write or random-read use cases when SSDs, 
> NVMe or RAM disks are involved, by not having to figure out write pipelines 
> on the fly. The Datanode would need to read the pipeline policy and route 
> writes to the appropriate storage volumes. An Ozone mover can be provided 
> for the archival data use case. 
> Additionally, it makes sense to propagate the policy information to 
> Containers; this will help an allocateBlock call for a key with storage 
> policy set to ALL_SSD to be routed to an open container on a Pipeline with 
> the same policy setting. The caveat is that at least one open pipeline needs 
> to be maintained at all times in order to support the storage policy setting. 
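A rough, hypothetical sketch of the Datanode-side routing described above. None of these names (StoragePolicy, DatanodeVolume, PolicyAwareVolumeChooser) exist in the Ozone code base; they only illustrate the step where a Datanode reads the pipeline's policy and picks a local volume whose storage type satisfies it.

{code:java}
import java.util.List;
import java.util.Optional;

// Illustrative stand-ins, not real Ozone types.
enum StorageType { RAM_DISK, SSD, DISK, ARCHIVE }
enum StoragePolicy { LAZY_PERSIST, ALL_SSD, HOT, COLD }

final class DatanodeVolume {
  private final StorageType type;
  DatanodeVolume(StorageType type) { this.type = type; }
  StorageType getStorageType() { return type; }
}

final class PolicyAwareVolumeChooser {
  /** Map the pipeline's storage policy to the storage type it requires. */
  private static StorageType requiredType(StoragePolicy policy) {
    if (policy == StoragePolicy.LAZY_PERSIST) {
      return StorageType.RAM_DISK;
    } else if (policy == StoragePolicy.ALL_SSD) {
      return StorageType.SSD;
    } else if (policy == StoragePolicy.COLD) {
      return StorageType.ARCHIVE;
    }
    return StorageType.DISK; // HOT and anything unspecified
  }

  /** Choose the first local volume matching the pipeline's storage policy. */
  Optional<DatanodeVolume> chooseVolume(List<DatanodeVolume> volumes,
                                        StoragePolicy pipelinePolicy) {
    StorageType wanted = requiredType(pipelinePolicy);
    return volumes.stream()
        .filter(v -> v.getStorageType() == wanted)
        .findFirst();
  }
}
{code}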



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1575) Add support for storage policies to Pipelines

2019-12-12 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1575:
-
Parent: (was: HDDS-1564)
Issue Type: Improvement  (was: Sub-task)

> Add support for storage policies to Pipelines
> ---------------------------------------------
>
> Key: HDDS-1575
> URL: https://issues.apache.org/jira/browse/HDDS-1575
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Siddharth Wagle
>Priority: Major
>
> The Pipeline-level storage policy can be thought of as segregating write 
> bandwidth for high-throughput write or random-read use cases when SSDs, 
> NVMe or RAM disks are involved, by not having to figure out write pipelines 
> on the fly. The Datanode would need to read the pipeline policy and route 
> writes to the appropriate storage volumes. An Ozone mover can be provided 
> for the archival data use case. 
> Additionally, it makes sense to propagate the policy information to 
> Containers; this will help an allocateBlock call for a key with storage 
> policy set to ALL_SSD to be routed to an open container on a Pipeline with 
> the same policy setting. The caveat is that at least one open pipeline needs 
> to be maintained at all times in order to support the storage policy setting. 
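Purely illustrative, not real SCM code: a minimal sketch of the container-matching step an allocateBlock call would need once policy information is propagated to Containers, assuming each open container can expose the storage policy of its pipeline (PipelineInfo, OpenContainer and the selector below are invented names).

{code:java}
import java.util.List;
import java.util.Optional;

// Illustrative stand-ins, not real SCM types.
enum StoragePolicy { ALL_SSD, HOT, COLD }

record PipelineInfo(String id, StoragePolicy policy) { }
record OpenContainer(long containerId, PipelineInfo pipeline) { }

final class PolicyMatchingContainerSelector {
  /**
   * Return an open container whose pipeline carries the policy requested for
   * the key. If this comes back empty, the caveat above applies: SCM would
   * need at least one open pipeline with that policy available at all times,
   * or the allocation has to fall back or fail.
   */
  Optional<OpenContainer> selectForAllocateBlock(
      List<OpenContainer> openContainers, StoragePolicy keyPolicy) {
    return openContainers.stream()
        .filter(c -> c.pipeline().policy() == keyPolicy)
        .findFirst();
  }
}
{code}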



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org