[ https://issues.apache.org/jira/browse/HDFS-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16044239#comment-16044239 ]

Brahma Reddy Battula commented on HDFS-11711:
---------------------------------------------

bq.it should just throw a new type of exception in these two cases.
This looks better; we can have a different type of exception. Instead of deleting 
the block on FNFE, how about validating that the file exists before opening the 
stream, and then throwing a different exception?
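
For illustration, a minimal sketch of that idea; the class and method names below 
(e.g. {{openBlockStream}}, {{ReplicaMissingException}}) are made up for this 
comment and are not taken from the actual patch:

{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

class BlockOpenExample {

  /** Thrown only when the replica file really does not exist on disk. */
  static class ReplicaMissingException extends IOException {
    ReplicaMissingException(String msg) { super(msg); }
  }

  static FileInputStream openBlockStream(File blockFile) throws IOException {
    if (!blockFile.exists()) {
      // File is genuinely gone -- the caller may treat the replica as missing.
      throw new ReplicaMissingException("Replica not found: " + blockFile);
    }
    try {
      return new FileInputStream(blockFile);
    } catch (FileNotFoundException e) {
      // File exists but could not be opened (e.g. EMFILE "Too many open files").
      // Re-throw as a plain IOException so the caller does NOT delete the block.
      throw new IOException("Could not open existing replica " + blockFile, e);
    }
  }
}
{code}

With that split, the caller can keep treating {{ReplicaMissingException}} as a 
reason to invalidate the replica, while a resource error such as "Too many open 
files" only fails the current read.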

> DN should not delete the block On "Too many open files" Exception
> -----------------------------------------------------------------
>
>                 Key: HDFS-11711
>                 URL: https://issues.apache.org/jira/browse/HDFS-11711
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>            Priority: Critical
>             Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
>         Attachments: HDFS-11711-002.patch, HDFS-11711-003.patch, 
> HDFS-11711-004.patch, HDFS-11711-branch-2-002.patch, 
> HDFS-11711-branch-2-003.patch, HDFS-11711.patch
>
>
>  *Seen the following scenario in one of our customer environments:* 
> * While the job client was writing {{"job.xml"}}, there were pipeline failures 
> and the block was written to only one DN.
> * When the mapper read {{"job.xml"}}, the DN hit {{"Too many open files"}} (the 
> system exceeded its open-file limit) and the block got deleted. Hence the 
> mapper failed to read and the job failed.


