[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328259#comment-15328259
 ] 

Kai Zheng commented on HDFS-7240:
---------------------------------

Thanks all for the discussion and [~anu] for this nice summary.

bq. To support Erasure coding, SCM will have to return more than 3 machines; 
say we were using the 6 + 3 model of erasure coding, then a container is 
spread across nine machines. Once we modify SCM to support this model, the 
container client will have to write data to those locations and update the 
RAFT state with the metadata of this block.
This sounds like supporting striped erasure coding on the client when 
putting/updating a k/v in the store, right? For small objects, each write 
would trigger the relatively expensive work of encoding and writing to 6+3 
locations, so I doubt the overhead would justify the benefit. For large 
objects it sounds fine. So, as we did for striped files, users should be able 
to opt in to striping or not according to their bucket's characteristics, I guess.
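To make the small-object concern concrete, here is a rough sketch (not Ozone code; the 64 KiB cell size and the partial-stripe accounting are my assumptions) comparing the bytes physically written for one object under 6+3 striping versus 3x replication:

```python
# Rough sketch, not Ozone code: bytes physically written for one object
# under 6+3 striped erasure coding vs. 3x replication.
# The 64 KiB cell size and partial-stripe accounting are assumptions.

CELL = 64 * 1024            # assumed striping cell size (bytes)
DATA_UNITS, PARITY_UNITS = 6, 3

def ec_stored_bytes(obj_size):
    """Bytes stored under 6+3 striping; every stripe, even a partial
    one, still materializes 3 parity cells sized to its largest data cell."""
    full_stripes, rem = divmod(obj_size, DATA_UNITS * CELL)
    stored = full_stripes * (DATA_UNITS + PARITY_UNITS) * CELL
    if rem:
        stored += rem + PARITY_UNITS * min(CELL, rem)
    return stored

def replicated_bytes(obj_size):
    return 3 * obj_size

# A 4 KiB object costs 16 KiB under EC (4x) but only 12 KiB replicated (3x),
# while a full-stripe object costs 1.5x under EC vs. 3x replicated.
small_ec = ec_stored_bytes(4 * 1024)        # 16384
small_rep = replicated_bytes(4 * 1024)      # 12288
large_ec = ec_stored_bytes(6 * CELL)        # 9 cells = 1.5x the data
```

Under these assumptions a small object pays more raw storage (and nine network round trips) with EC than with plain replication, which is why striping seems best left as a per-bucket opt-in.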

For HDFS files, in addition to striping, there is another way to do erasure 
coding at the block level, as discussed in HDFS-8030, mainly targeting 
conversion of old/cold data from replicas into erasure-coded form to save 
storage. How about this approach in Ozone? Would we have old/cold buckets 
that can be frozen and never updated again? I'm not sure about this from the 
users' point of view, but we might not reuse the same sets of 
buckets/containers across many years, right? 
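The appeal of such a conversion is easy to quantify. A back-of-the-envelope calculation (the 100 TB figure is purely illustrative) for moving cold data from 3x replication to a 6+3 layout:

```python
# Illustrative arithmetic for converting a hypothetical 100 TB cold data
# set from 3x replication to a 6+3 erasure-coded layout (the HDFS-8030
# idea). The data set size is made up; the multipliers are not.

cold_data_tb = 100
replicated_tb = cold_data_tb * 3               # 3 replicas -> 300 TB raw
ec_tb = cold_data_tb * (6 + 3) / 6             # 1.5x multiplier -> 150 TB raw
savings = 1 - ec_tb / replicated_tb            # 50% of raw storage reclaimed
```

A 50% reduction in raw storage is the usual motivation for converting frozen data, which is why the question of whether Ozone buckets ever become truly cold matters here.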

> Object store in HDFS
> --------------------
>
>                 Key: HDFS-7240
>                 URL: https://issues.apache.org/jira/browse/HDFS-7240
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Jitendra Nath Pandey
>            Assignee: Jitendra Nath Pandey
>         Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
