[ https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDDS-247:
-------------------------------------
    Attachment: HDDS-247.07.patch

> Handle CLOSED_CONTAINER_IO exception in ozoneClient
> ---------------------------------------------------
>
>                 Key: HDDS-247
>                 URL: https://issues.apache.org/jira/browse/HDDS-247
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Client
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Blocker
>             Fix For: 0.2.1
>
>         Attachments: HDDS-247.00.patch, HDDS-247.01.patch, HDDS-247.02.patch, 
> HDDS-247.03.patch, HDDS-247.04.patch, HDDS-247.05.patch, HDDS-247.06.patch, 
> HDDS-247.07.patch
>
>
> In case of ongoing writes by the Ozone client to a container, the container 
> might get closed on the Datanodes because of node loss, out-of-space issues, 
> etc. In such cases, the write operation will fail with a CLOSED_CONTAINER_IO 
> exception. The Ozone client should then fetch the committed length of the 
> block from the Datanodes and update the OM with it. This Jira aims to 
> address this issue.
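
A minimal sketch of the retry flow described above, under stated assumptions: the DatanodeClient and OmClient interfaces and their methods (writeChunk, getCommittedBlockLength, updateBlockLength, allocateNewBlock) are hypothetical stand-ins, not the actual Ozone client classes. On CLOSED_CONTAINER_IO the client asks the Datanode for the committed block length, records it in the OM, and rewrites the unwritten remainder on a freshly allocated block.

    import java.io.IOException;
    import java.util.Arrays;

    public class ClosedContainerRetrySketch {

      /** Hypothetical exception corresponding to a CLOSED_CONTAINER_IO result. */
      static class ClosedContainerException extends IOException {
        ClosedContainerException(String msg) { super(msg); }
      }

      /** Hypothetical Datanode-facing client. */
      interface DatanodeClient {
        void writeChunk(long blockId, byte[] data, long offset) throws IOException;
        long getCommittedBlockLength(long blockId) throws IOException;
      }

      /** Hypothetical OM-facing client. */
      interface OmClient {
        void updateBlockLength(long blockId, long committedLength) throws IOException;
        long allocateNewBlock() throws IOException;
      }

      private final DatanodeClient dnClient;
      private final OmClient omClient;

      ClosedContainerRetrySketch(DatanodeClient dnClient, OmClient omClient) {
        this.dnClient = dnClient;
        this.omClient = omClient;
      }

      /**
       * Writes a chunk; if the container was closed mid-write, fetches the length
       * the Datanode actually committed, records it in the OM, and retries the
       * unwritten remainder on a new block.
       */
      void writeWithClosedContainerHandling(long blockId, byte[] data, long offset)
          throws IOException {
        try {
          dnClient.writeChunk(blockId, data, offset);
        } catch (ClosedContainerException e) {
          // Container closed underneath us (node loss, out of space, ...).
          // Ask the Datanode how much of the block it actually committed.
          long committedLength = dnClient.getCommittedBlockLength(blockId);

          // Report the real length of the partially written block to the OM
          // so the key metadata stays consistent.
          omClient.updateBlockLength(blockId, committedLength);

          // Rewrite whatever did not make it into the closed container.
          long alreadyWritten = Math.max(0, committedLength - offset);
          if (alreadyWritten < data.length) {
            long newBlockId = omClient.allocateNewBlock();
            byte[] remainder = Arrays.copyOfRange(data, (int) alreadyWritten, data.length);
            dnClient.writeChunk(newBlockId, remainder, 0);
          }
        }
      }
    }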



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
