[ https://issues.apache.org/jira/browse/HDDS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553818#comment-16553818 ]
Mukul Kumar Singh commented on HDDS-203:
----------------------------------------

Thanks for the updated patch [~shashikant]. Apart from Nicholas's comments, a couple more comments:

1) With HDDS-181, the close container should commit the pending block. Can TestCommittedBlockLengthAPI:103 be replaced with a writeChunk request, so that we can check that a key is committed on close and that the committed length is correct?

2) At TestCommittedBlockLengthAPI:152, the exception thrown here will result in a case where "xceiverClientManager.releaseClient(client);" is not called.

> Add getCommittedBlockLength API in datanode
> -------------------------------------------
>
>                 Key: HDDS-203
>                 URL: https://issues.apache.org/jira/browse/HDDS-203
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Client, Ozone Datanode
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Major
>             Fix For: 0.2.1
>
>         Attachments: HDDS-203.00.patch, HDDS-203.01.patch, HDDS-203.02.patch, HDDS-203.03.patch, HDDS-203.04.patch
>
>
> When a container gets closed on the Datanode while active writes are happening from OzoneClient, client write requests will fail with ContainerClosedException. In such a case, the Ozone client needs to enquire the last committed block length from the datanodes and update the OzoneMaster with the updated length for the block. This Jira proposes to add an RPC call to get the last committed length of a block on a Datanode.
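The leak raised in comment 2 can be avoided with the standard try/finally release pattern, so that releaseClient runs on both the normal and the exceptional path. A minimal self-contained sketch; ClientManager and Client below are hypothetical stand-ins for XceiverClientManager and the acquired client, not the real Ozone API:

```java
// Hypothetical stand-in for the acquired client handle.
class Client {
    boolean released = false;
}

// Hypothetical stand-in for XceiverClientManager; tracks the last
// acquired client so the test can verify it was released.
class ClientManager {
    Client lastAcquired;

    Client acquireClient() {
        lastAcquired = new Client();
        return lastAcquired;
    }

    void releaseClient(Client client) {
        client.released = true;
    }
}

public class ReleasePattern {
    // Acquires a client, runs the (possibly failing) body, and
    // guarantees the client is released even when the body throws.
    public static Client runWithRelease(ClientManager manager, boolean fail) {
        Client client = manager.acquireClient();
        try {
            if (fail) {
                throw new RuntimeException("simulated test failure");
            }
            return client;
        } finally {
            // Runs on both the normal and the exceptional path.
            manager.releaseClient(client);
        }
    }

    public static void main(String[] args) {
        ClientManager manager = new ClientManager();
        runWithRelease(manager, false);
        System.out.println("released on success: " + manager.lastAcquired.released);
        try {
            runWithRelease(manager, true);
        } catch (RuntimeException expected) {
            // The client must still have been released.
        }
        System.out.println("released on failure: " + manager.lastAcquired.released);
    }
}
```

In the test itself, wrapping the body between acquireClient and releaseClient in try/finally (or moving releaseClient into an @After method) would keep the client pool consistent even when an assertion fails mid-test.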