[ https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16569824#comment-16569824 ]

Mukul Kumar Singh commented on HDDS-247:
----------------------------------------

Thanks for working on this [~shashikant].

1) The changes in KeyValueHandler are not needed; they can be reverted.
2) OmKeyInfo#updateLocationInfoList: please add a comment explaining that only 
the blocks for the latest version are being removed.
3) ChunkOutputStream#copyBuffer can be replaced with this.buffer.put(buffer); 
also, can we use two different names for the two buffer variables?
4) ChunkGroupOutputStream.java: there are lots of unrelated changes due to 
indentation. Removing them will help in reviewing the patch.
5) ChunkGroupOutputStream#handleCloseContainerException: the return value is 
cast to int and then assigned to a long. Please avoid the cast and change the 
return variables to long.
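To illustrate points 3 and 5, here is a minimal, self-contained sketch (the class and method names are placeholders, not the actual Ozone classes): ByteBuffer.put(ByteBuffer) bulk-copies the source buffer, a distinct parameter name avoids shadowing the field, and returning long directly avoids the truncating (int) cast.

```java
import java.nio.ByteBuffer;

public class BufferCopyExample {
    // Field named 'buffer'; the parameter below gets a different name
    // ('src') so the two are not confused (point 3).
    private final ByteBuffer buffer = ByteBuffer.allocate(64);

    // Point 3: ByteBuffer.put(ByteBuffer) bulk-copies all remaining
    // bytes of 'src' into this.buffer, replacing a manual copy loop.
    void writeChunk(ByteBuffer src) {
        this.buffer.put(src);
    }

    // Point 5: keep the value as long end to end. A cast to int here
    // would silently truncate any length beyond ~2 GB.
    long committedLength(long writtenBytes) {
        return writtenBytes; // no (int) cast
    }

    public static void main(String[] args) {
        BufferCopyExample ex = new BufferCopyExample();
        ex.writeChunk(ByteBuffer.wrap("hello".getBytes()));
        // A length larger than Integer.MAX_VALUE survives intact.
        System.out.println(ex.committedLength(3L << 31));
    }
}
```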

> Handle CLOSED_CONTAINER_IO exception in ozoneClient
> ---------------------------------------------------
>
>                 Key: HDDS-247
>                 URL: https://issues.apache.org/jira/browse/HDDS-247
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Client
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Blocker
>             Fix For: 0.2.1
>
>         Attachments: HDDS-247.00.patch, HDDS-247.01.patch, HDDS-247.02.patch, 
> HDDS-247.03.patch, HDDS-247.04.patch
>
>
> In case of ongoing writes by the Ozone client to a container, the container 
> might get closed on the Datanodes because of node loss, out-of-space issues, 
> etc. In such cases, the write will fail with a CLOSED_CONTAINER_IO exception, 
> and the ozone client should try to get the committed length of the block from 
> the Datanodes and update the KSM. This Jira aims to address this issue.
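The recovery flow the issue describes could be sketched roughly as follows. All interface and method names here are hypothetical placeholders, not the real Ozone client API; the point is only the shape of the handling: on CLOSED_CONTAINER_IO, query the datanode for the committed block length, then record it with the KSM.

```java
// Hypothetical sketch of the recovery path described in the issue.
public class ClosedContainerRecovery {
    // Placeholder abstractions, not actual Ozone interfaces.
    interface DatanodeClient { long getCommittedBlockLength(String blockId); }
    interface KsmClient { void updateBlockLength(String blockId, long length); }

    static long recover(DatanodeClient dn, KsmClient ksm, String blockId) {
        // Ask the datanode how much of the block was actually committed
        // before the container closed...
        long committed = dn.getCommittedBlockLength(blockId);
        // ...then update the KSM so the key metadata reflects only the
        // durable bytes.
        ksm.updateBlockLength(blockId, committed);
        return committed;
    }

    public static void main(String[] args) {
        long committed = recover(
            id -> 4096L,
            (id, len) -> System.out.println("KSM updated: " + id + " -> " + len),
            "blk-1");
        System.out.println(committed);
    }
}
```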



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
