[ https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16706077#comment-16706077 ]
Shashikant Banerjee commented on HDDS-870:
------------------------------------------

Thanks [~msingh] for the review comments.
{code:java}
4) ChunkGroupOutputStream,382, this code should be moved before handleWrites, as during handleWrites, new block will be allocated and added to streamEntries, one of these blocks can be removed mistakenly here.
{code}
The block which gets removed corresponds to the streamIndex passed as a parameter to the function, which in turn corresponds to the entry that has encountered an exception and needs to be removed. It does not work on a class variable, so we should be safe here. Moreover, if the remove code is called before handling the write, we need to handle cases such as whether this is the 1st streamEntry in the list, and logically the data belonging to the current entry has to be written over to the next stream before we remove it from the list.

Patch v4 addresses the rest of your review comments. It also addresses the related test failures.

> Avoid creating block sized buffer in ChunkGroupOutputStream
> -----------------------------------------------------------
>
>                 Key: HDDS-870
>                 URL: https://issues.apache.org/jira/browse/HDDS-870
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: Ozone Client
>    Affects Versions: 0.4.0
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Major
>             Fix For: 0.4.0
>
>      Attachments: HDDS-870.000.patch, HDDS-870.001.patch, HDDS-870.002.patch, HDDS-870.003.patch, HDDS-870.004.patch
>
>
> Currently, for a key, we create a block-sized ByteBuffer for caching data. This can be replaced with an array of buffers of the flush buffer size, which is configured to handle 2-node failures as well.
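
To make the proposed change concrete, below is a minimal sketch of the buffering approach described in the issue: instead of reserving one block-sized ByteBuffer per key, data is staged in a list of flush-sized buffers allocated on demand. The class name IncrementalBufferList, its constructor parameters, and its methods are hypothetical illustrations, not the code in the attached patches.

{code:java}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (not the actual HDDS-870 patch): instead of one
// block-sized ByteBuffer per key, keep a list of smaller buffers, each of
// the configured flush size, allocated only as data arrives.
public class IncrementalBufferList {

  private final int flushBufferSize;   // assumed config value, e.g. 4 MB
  private final long maxBlockSize;     // assumed config value, e.g. 256 MB
  private final List<ByteBuffer> buffers = new ArrayList<>();

  public IncrementalBufferList(int flushBufferSize, long maxBlockSize) {
    this.flushBufferSize = flushBufferSize;
    this.maxBlockSize = maxBlockSize;
  }

  // Returns a buffer with remaining space, allocating a new flush-sized
  // buffer only when the last one is full, so memory grows with the data
  // actually written rather than being reserved up front.
  private ByteBuffer currentBuffer() {
    if (buffers.isEmpty() || !buffers.get(buffers.size() - 1).hasRemaining()) {
      if ((long) buffers.size() * flushBufferSize >= maxBlockSize) {
        throw new IllegalStateException("block is full");
      }
      buffers.add(ByteBuffer.allocate(flushBufferSize));
    }
    return buffers.get(buffers.size() - 1);
  }

  public void write(byte[] data, int off, int len) {
    while (len > 0) {
      ByteBuffer buf = currentBuffer();
      int toWrite = Math.min(len, buf.remaining());
      buf.put(data, off, toWrite);
      off += toWrite;
      len -= toWrite;
    }
  }

  // Buffers that are already full can be flushed (and released once
  // acknowledged), which bounds per-key memory to a few flush-sized chunks.
  public List<ByteBuffer> fullBuffers() {
    List<ByteBuffer> full = new ArrayList<>();
    for (ByteBuffer b : buffers) {
      if (!b.hasRemaining()) {
        full.add(b);
      }
    }
    return full;
  }
}
{code}

With example values such as a 256 MB block size and a 4 MB flush buffer size, per-key memory is bounded by the number of un-acknowledged flush-sized chunks rather than a full block-sized allocation.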