bc Wong updated MAPREDUCE-5862:
-------------------------------

    Attachment: 0001-Handle-records-larger-than-2x-split-size.1.patch

Thanks for taking a look, Jason. I fixed {{maxBytesToConsume}} instead, and added tests for the mapred variant. I also added tests for compressed input, for sanity's sake. Compressed input already works correctly, but while I'm here, why not cover it?

> Line records longer than 2x split size aren't handled correctly
> ----------------------------------------------------------------
>
>                 Key: MAPREDUCE-5862
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5862
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 2.3.0
>            Reporter: bc Wong
>            Assignee: bc Wong
>            Priority: Critical
>         Attachments: 0001-Handle-records-larger-than-2x-split-size.1.patch, 0001-Handle-records-larger-than-2x-split-size.patch, 0001-Handle-records-larger-than-2x-split-size.patch, recordSpanningMultipleSplits.txt.bz2
>
>
> Suppose this split (100-200) is in the middle of a record (90-240):
> {noformat}
> 0               100            200             300
> |---- split ----|---- curr ----|---- split ----|
>               <------- record ------->
>               90                   240
> {noformat}
>
> Currently, the reader for the first split reads the entire record, up to offset 240, which is correct. But the reader for the second split has a bug: it produces a phantom record covering (200, 240).
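To make the mechanism concrete, here is a minimal, self-contained sketch using the offsets from the diagram above. The class and its {{main}} are illustrative only: the names mirror {{LineRecordReader}}'s {{end}}, {{maxLineLength}}, and {{isCompressedInput}}, but this is a sketch of the idea, not the attached patch. A reader whose split does not start at offset 0 first skips the partial line it begins inside; if that skip is capped at the split end, it stalls at offset 200 without reaching the newline at 240, and the leftover range (200, 240) is later emitted as a record.

{code:java}
// Hedged, self-contained sketch (not the attached patch): shows why capping
// the "skip the leading partial line" read at the split end produces the
// phantom record, and how raising the cap to maxLineLength avoids it.
public class MaxBytesToConsumeSketch {
  static final boolean isCompressedInput = false;
  static final long end = 200;                        // "curr" split is [100, 200)
  static final int maxLineLength = Integer.MAX_VALUE; // default: effectively unbounded

  // Old behavior: never consume past the split end.
  static int maxBytesToConsumeOld(long pos) {
    return isCompressedInput
        ? Integer.MAX_VALUE
        : (int) Math.min(Integer.MAX_VALUE, end - pos);
  }

  // Fixed behavior: allow consuming past the split end, up to maxLineLength,
  // so a record straddling the boundary can be finished.
  static int maxBytesToConsumeFixed(long pos) {
    return isCompressedInput
        ? Integer.MAX_VALUE
        : (int) Math.max(Math.min(Integer.MAX_VALUE, end - pos), maxLineLength);
  }

  public static void main(String[] args) {
    long start = 100;      // reader begins mid-record
    long newlineAt = 240;  // the record's terminating newline

    // Skipping the leading partial line with the old cap stalls at 200, so
    // the reader later emits (200, 240) as a phantom record.
    long stopOld = Math.min(start + maxBytesToConsumeOld(start), newlineAt);
    System.out.println("old cap -> skip stops at " + stopOld);     // 200

    // With the raised cap the skip reaches the real newline at 240; the
    // reader then sees pos > end and emits nothing for this split.
    long stopFixed = Math.min(start + maxBytesToConsumeFixed(start), newlineAt);
    System.out.println("fixed cap -> skip stops at " + stopFixed); // 240
  }
}
{code}

The key property of the {{maxBytesToConsume}} change is that only the cap moves: readers are already expected to read past their split end to finish a record that straddles the boundary, which is why the first split handles (90, 240) correctly today.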