[ 
https://issues.apache.org/jira/browse/HDDS-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16823374#comment-16823374
 ] 

Anu Engineer commented on HDDS-1452:
------------------------------------

Possibly; I am not sure which would be the better option: one single large file or 
RocksDB. Either way, when we do this, we need to make sure that we do not end up 
with a single-block-to-single-file mapping. It is better to have the ability to 
control the data size of the files.

One downside of keeping the 1 KB files in RocksDB is that erasure coding might 
become harder: with separate data files, we can take a closed container, erasure 
code all the data files, and leave the metadata in RocksDB without erasure coding. 
That is my only concern with leaving 1 KB values inside RocksDB; we will also have 
to benchmark how it works out.
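To make the single-file layout concrete, here is a minimal sketch (not Ozone code; all names and the fixed chunk size are illustrative assumptions) of writing each chunk of a block at a computed offset inside one block file, instead of one file per chunk:

```python
import os
import tempfile

# Illustrative only: real chunk sizes are configurable, not a constant.
CHUNK_SIZE = 4

def write_chunk(path, chunk_index, data):
    """Place chunk data at offset chunk_index * CHUNK_SIZE in the block file.

    Chunks can arrive in any order; seeking to a fixed per-chunk offset
    means no per-chunk file (and no rename/merge step) is needed.
    """
    mode = "r+b" if os.path.exists(path) else "w+b"
    with open(path, mode) as f:
        f.seek(chunk_index * CHUNK_SIZE)
        f.write(data)

# Usage: two chunks land in one file at their respective offsets.
block_file = tempfile.mktemp()
write_chunk(block_file, 1, b"bbbb")  # out-of-order arrival is fine
write_chunk(block_file, 0, b"aaaa")
with open(block_file, "rb") as f:
    content = f.read()
print(content)  # b'aaaabbbb'
```

A layout like this also keeps a closed container's data in a small number of files, which is the property that makes erasure coding the data files (while leaving metadata in RocksDB) straightforward.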

 

> All chunks should happen to a single file for a block in datanode
> -----------------------------------------------------------------
>
>                 Key: HDDS-1452
>                 URL: https://issues.apache.org/jira/browse/HDDS-1452
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Datanode
>    Affects Versions: 0.5.0
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Major
>             Fix For: 0.5.0
>
>
> Currently, each chunk of a block is written to its own chunk file in the 
> datanode. The idea here is to write all chunks of a block to a single file 
> in the datanode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
