Brahma Reddy Battula created HDFS-11711:
-------------------------------------------

             Summary: DN should not delete the block on "Too many open files" exception
                 Key: HDFS-11711
                 URL: https://issues.apache.org/jira/browse/HDFS-11711
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode
            Reporter: Brahma Reddy Battula
            Assignee: Brahma Reddy Battula

*Seen the following scenario in one of our customer environments:*
* While the job client was writing {{"job.xml"}}, there were pipeline failures, so the block ended up on only one DN.
* When the mapper read {{"job.xml"}}, that DN hit a {{"Too many open files"}} exception (the system exceeded its open-file limit) and the block got deleted. The mapper therefore failed to read the file, and the job failed.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
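The gist of the report is that a "Too many open files" failure is transient resource exhaustion, not block corruption, so the DN should not react by invalidating its only replica. A minimal sketch of the idea (this is not the actual HDFS patch; the class name, method, and message check are hypothetical, purely for illustration):

```java
import java.io.IOException;

public class BlockErrorClassifier {

    // Hypothetical helper: returns true when an IOException indicates
    // transient fd exhaustion (EMFILE) rather than a damaged block.
    // A DN seeing such an error should retry/fail the read, not delete
    // the replica.
    static boolean isResourceExhaustion(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("Too many open files");
    }

    public static void main(String[] args) {
        IOException fdLimit = new IOException("/data/blk_123: Too many open files");
        IOException corrupt = new IOException("Checksum error in blk_123");
        System.out.println(isResourceExhaustion(fdLimit));  // true  -> do not invalidate
        System.out.println(isResourceExhaustion(corrupt));  // false -> corruption handling may apply
    }
}
```

The point is that only errors genuinely attributable to the block (e.g. checksum failures) should feed into replica invalidation; OS-level resource limits should surface as retriable read failures instead.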