[ 
https://issues.apache.org/jira/browse/HDDS-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16823359#comment-16823359
 ] 

Anu Engineer commented on HDDS-1452:
------------------------------------

Just a thought: Would it make sense to keep writing to a data file until it 
reaches, say, 1 GB? That way we can append any chunk write to a file until it 
is large enough. This addresses the use case where we are writing, say, 1 KB 
Ozone keys. In the current proposal, if I write only 1 KB keys, would we end 
up with 1 KB block files? Just a thought, since you are planning to address 
this issue.
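To illustrate the idea being discussed, here is a minimal sketch of a writer that appends every chunk to a single block file, tracks the write offset, and reports when the file has grown past a rollover threshold (e.g. 1 GB). This is a hypothetical illustration only; the class and method names (`BlockFileWriter`, `writeChunk`, `isFull`) are assumptions for this sketch and are not Ozone datanode APIs.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch: append all chunks of incoming writes to one file,
// tracking the offset of each chunk, until the file reaches a size limit.
public class BlockFileWriter {
    private final FileChannel channel;
    private final long maxSize;   // rollover threshold, e.g. 1 GB
    private long offset = 0;      // next write position in the file

    public BlockFileWriter(Path blockFile, long maxSize) throws IOException {
        this.channel = FileChannel.open(blockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        this.maxSize = maxSize;
    }

    /** Appends one chunk and returns the offset it was written at. */
    public long writeChunk(byte[] chunk) throws IOException {
        long chunkOffset = offset;
        ByteBuffer buf = ByteBuffer.wrap(chunk);
        while (buf.hasRemaining()) {
            // Positional write: offset advances with what has been consumed.
            channel.write(buf, chunkOffset + buf.position());
        }
        offset += chunk.length;
        return chunkOffset;
    }

    /** True once the file is large enough to stop accepting new chunks. */
    public boolean isFull() {
        return offset >= maxSize;
    }

    public void close() throws IOException {
        channel.close();
    }
}
```

With a scheme like this, a stream of small (e.g. 1 KB) keys would accumulate in one large file, and the container metadata would only need to record the (offset, length) pair for each chunk rather than a separate file path per chunk.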

> All chunks should happen to a single file for a block in datanode
> -----------------------------------------------------------------
>
>                 Key: HDDS-1452
>                 URL: https://issues.apache.org/jira/browse/HDDS-1452
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Datanode
>    Affects Versions: 0.5.0
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Major
>             Fix For: 0.5.0
>
>
> Currently, each chunk of a block is written to its own chunk file on the 
> datanode. The idea here is to write all the individual chunks to a single 
> file on the datanode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
