[
https://issues.apache.org/jira/browse/HADOOP-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12537633
]
Owen O'Malley commented on HADOOP-2098:
---------------------------------------
No, I don't think throwing an exception is the right answer. It is perfectly
reasonable to set up a map/reduce job that reads from a directory every 30
minutes, and it should not be an error for the input directory to be empty. That
would be like "cat < /dev/null" causing an error...
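The analogy above can be checked directly in a shell: reading from an empty input is not an error for standard Unix tools, which is the behavior the comment argues map/reduce should follow (a minimal sketch, nothing Hadoop-specific):

```shell
# Redirect the empty device into cat: it produces no output and exits
# successfully, rather than treating "nothing to read" as a failure.
cat < /dev/null
echo "exit status: $?"   # prints "exit status: 0"
```

The same convention holds across coreutils: empty input is an ordinary, successful case, not an exceptional one.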
> File handles for log files are still open in case of jobs with 0 maps
> ---------------------------------------------------------------------
>
> Key: HADOOP-2098
> URL: https://issues.apache.org/jira/browse/HADOOP-2098
> Project: Hadoop
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.15.0
> Reporter: Amar Kamat
> Assignee: Amar Kamat
> Fix For: 0.16.0
>
> Attachments: HADOOP-2098.patch
>
>
> When a job with zero maps is submitted, the handle for that job's log file
> remains open and can be seen using {{lsof}}. Over time this could lead to a
> {{Too many open files}} exception.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.