[ https://issues.apache.org/jira/browse/HDFS-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040096#comment-16040096 ]
Wei-Chiu Chuang commented on HDFS-11711:
----------------------------------------

+1 too. Thanks for the patch. I think the fix is good, but I wish there were a more portable way to check for the "Too many open files" error.

> DN should not delete the block on "Too many open files" Exception
> -----------------------------------------------------------------
>
>                 Key: HDFS-11711
>                 URL: https://issues.apache.org/jira/browse/HDFS-11711
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>            Priority: Critical
>         Attachments: HDFS-11711-002.patch, HDFS-11711-003.patch, HDFS-11711-004.patch, HDFS-11711-branch-2-002.patch, HDFS-11711.patch
>
>
> *We saw the following scenario in one of our customer environments:*
> * While the job client was writing {{"job.xml"}}, there were pipeline failures, so the file ended up written to only one DN.
> * When the mapper read {{"job.xml"}}, that DN hit {{"Too many open files"}} (the system exceeded its open-file limit) and the block got deleted. The mapper therefore failed to read the file, and the job failed.
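
For context on the portability concern above: plain Java does not expose the underlying errno (EMFILE/ENFILE), so a check for this condition presumably has to match on the exception's message text. Below is a minimal, hypothetical sketch (not the actual HDFS-11711 patch; the class and method names are made up) of what such a message-based check looks like and why it is fragile: the exact text comes from the OS and can vary with platform and locale.

{code:java}
// Hypothetical sketch, not the actual HDFS-11711 patch: detect
// EMFILE/ENFILE ("Too many open files") by inspecting the exception
// message, since plain Java does not expose the underlying errno.
import java.io.FileInputStream;
import java.io.IOException;

public class TooManyOpenFilesCheck {

  /**
   * Walks the cause chain and returns true if any message contains
   * "Too many open files". This is the non-portable part: the text
   * comes from the OS/libc and can differ across platforms and locales.
   */
  static boolean isTooManyOpenFiles(IOException e) {
    for (Throwable t = e; t != null; t = t.getCause()) {
      String msg = t.getMessage();
      if (msg != null && msg.contains("Too many open files")) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    try (FileInputStream in = new FileInputStream("job.xml")) {
      in.read(); // any read that can fail with an IOException
    } catch (IOException e) {
      if (isTooManyOpenFiles(e)) {
        // Transient fd exhaustion: the replica on disk is fine, so a
        // DN must not mark the block missing/corrupt or delete it.
        System.err.println("fd exhaustion, replica is intact: " + e);
      } else {
        // Other I/O errors may genuinely indicate a bad replica.
        System.err.println("treating as a block error: " + e);
      }
    }
  }
}
{code}

A genuinely portable check would need the raw errno (e.g. via native code), which is presumably why a cleaner alternative to string matching is not obvious here.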