[ https://issues.apache.org/jira/browse/HADOOP-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12594445#action_12594445 ]

Chris Douglas commented on HADOOP-3144:
---------------------------------------

I didn't notice that bytesConsumed in readLine had changed from long to int in 
the last patch. If maxBytesToConsume is set to Integer.MAX_VALUE, as is often 
the case, then the overflow might still be missed (in the unlikely event this 
is hit before an OOM exception). Either bytesConsumed should be a long and 
min(bytesConsumed, Integer.MAX_VALUE) returned, or the overflow should be 
detected.
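
For reference, a minimal sketch of the first alternative (the class name, 
signature and helper are illustrative, not the exact LineRecordReader code): 
keep the running count as a long and clamp what is returned.

import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.io.Text;

// Sketch only: keep the byte count as a long so that
// maxBytesToConsume == Integer.MAX_VALUE cannot cause a silent int overflow,
// and clamp the value handed back to the caller.
public class ReadLineSketch {
  static int readLine(InputStream in, Text line, int maxLineLength,
                      int maxBytesToConsume) throws IOException {
    line.clear();
    long bytesConsumed = 0;              // long: the running total cannot wrap
    int b;
    while (bytesConsumed < maxBytesToConsume && (b = in.read()) != -1) {
      bytesConsumed++;
      if (b == '\n') {
        break;                           // end of record
      }
      if (line.getLength() < maxLineLength) {
        line.append(new byte[] { (byte) b }, 0, 1);
      }
    }
    // Clamp rather than cast blindly: callers see at most Integer.MAX_VALUE.
    return (int) Math.min(bytesConsumed, Integer.MAX_VALUE);
  }
}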

The indentation at LineRecordReader:157 is still funky.

Has this been tested against the corrupted data?

> better fault tolerance for corrupted text files
> -----------------------------------------------
>
>                 Key: HADOOP-3144
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3144
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.15.3
>            Reporter: Joydeep Sen Sarma
>            Assignee: Zheng Shao
>         Attachments: 3144-4.patch, 3144-5.patch, 3144-6.patch, 
> 3144-ignore-spaces-2.patch, 3144-ignore-spaces-3.patch
>
>
> Every once in a while we encounter corrupted text files (corrupted at the 
> source, prior to copying into Hadoop). Inevitably, some of the data looks 
> like a really, really long line, and Hadoop trips over trying to stuff it 
> into an in-memory object and gets an out-of-memory error. The code looks the 
> same in trunk as well.
> So we are looking for an option on TextInputFormat (and the like) to ignore 
> long lines; ideally we would just skip errant lines above a certain size 
> limit.
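
For illustration, a generic sketch of the behavior requested above, using 
plain java.io and hypothetical names; the real change would live in 
TextInputFormat / LineRecordReader rather than a standalone helper.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch only: lines longer than a caller-supplied limit are consumed and
// dropped instead of being buffered, so a corrupted multi-megabyte "line"
// cannot cause an OutOfMemoryError.
public class SkipLongLines {
  /** Returns the next line no longer than maxLineLength, or null at EOF. */
  static String nextLine(InputStream in, int maxLineLength) throws IOException {
    while (true) {
      ByteArrayOutputStream buf = new ByteArrayOutputStream();
      boolean tooLong = false;
      int b;
      while ((b = in.read()) != -1 && b != '\n') {
        if (buf.size() < maxLineLength) {
          buf.write(b);
        } else {
          tooLong = true;              // keep consuming, but stop buffering
        }
      }
      if (b == -1 && buf.size() == 0) {
        return null;                   // end of stream
      }
      if (!tooLong) {
        return buf.toString("UTF-8");
      }
      // Oversized line: discard it and move on to the next one.
    }
  }

  public static void main(String[] args) throws IOException {
    StringBuilder longLine = new StringBuilder();
    for (int i = 0; i < 1000; i++) {
      longLine.append('x');
    }
    byte[] data = ("short\n" + longLine + "\nalso short\n").getBytes("UTF-8");
    InputStream in = new ByteArrayInputStream(data);
    String line;
    while ((line = nextLine(in, 100)) != null) {
      System.out.println(line);        // prints only the two short lines
    }
  }
}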

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
