[ https://issues.apache.org/jira/browse/HADOOP-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12588123#action_12588123 ]
Zheng Shao commented on HADOOP-3144:
------------------------------------
Hi Dhruba,
Did you get a chance to look at the new diff?
Zheng
> better fault tolerance for corrupted text files
> -----------------------------------------------
>
> Key: HADOOP-3144
> URL: https://issues.apache.org/jira/browse/HADOOP-3144
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.15.3
> Reporter: Joydeep Sen Sarma
> Assignee: Zheng Shao
> Attachments: 3144-ignore-spaces-2.patch
>
>
> Every once in a while we encounter corrupted text files (corrupted at the
> source, prior to being copied into Hadoop). Inevitably, some of the data
> looks like one extremely long line, and Hadoop trips over trying to stuff
> it into an in-memory object and fails with an OutOfMemoryError. The code
> looks the same way in trunk as well.
> So we are looking for an option on TextInputFormat (and the like) to
> ignore long lines; ideally, it would simply skip errant lines above a
> certain size limit (see the sketch below).
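
For context, here is a minimal sketch of the skip-long-lines idea in Java. It is illustrative only, not the attached 3144-ignore-spaces-2.patch; BoundedLineReader, readLine, and the maxLineLength parameter are names assumed for the example.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Sketch of a line reader that never buffers more than maxLineLength
 * bytes per line. Over-long (likely corrupt) lines are still consumed
 * from the stream, but are reported back as skippable, so the caller
 * never holds a huge line in memory.
 */
public class BoundedLineReader {
  private final InputStream in;
  private final int maxLineLength;

  public BoundedLineReader(InputStream in, int maxLineLength) {
    this.in = in;
    this.maxLineLength = maxLineLength;
  }

  /**
   * Reads one '\n'-terminated line into buf. Returns the number of bytes
   * consumed from the stream, or -1 at end of stream. If the line exceeded
   * maxLineLength, buf is cleared so the caller can skip the record.
   */
  public long readLine(ByteArrayOutputStream buf) throws IOException {
    buf.reset();
    long bytesConsumed = 0;
    boolean truncated = false;
    int b;
    while ((b = in.read()) != -1) {
      bytesConsumed++;
      if (b == '\n') {
        break;                        // end of line
      }
      if (buf.size() < maxLineLength) {
        buf.write(b);                 // normal case: buffer the byte
      } else {
        truncated = true;             // over the cap: drain, don't buffer
      }
    }
    if (truncated) {
      buf.reset();                    // empty buffer signals "skip me"
    }
    return (bytesConsumed == 0) ? -1 : bytesConsumed;
  }
}

A record reader built on this would advance its position by the bytes consumed and simply not emit a record when the buffer comes back empty, which gives the "skip errant lines above a certain size limit" behavior the issue asks for.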