[ https://issues.apache.org/jira/browse/PIG-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Cheolsoo Park updated PIG-3059:
-------------------------------

    Attachment: PIG-3059.patch

Hi Joe,

I'm sorry if you're already working on this jira, but I was working with a customer who has issues with bad input files, so I thought I might just solve it once and for all.

As per your suggestion, I moved the error handling code to {{PigRecordReader}}, so the threshold should now work for any record reader, including {{PigAvroRecordReader}}.

I uploaded the patch to ReviewBoard for your review: https://reviews.apache.org/r/8765/

Please let me know what you think. Thanks!

> Global configurable minimum 'bad record' thresholds
> ---------------------------------------------------
>
>                 Key: PIG-3059
>                 URL: https://issues.apache.org/jira/browse/PIG-3059
>             Project: Pig
>          Issue Type: New Feature
>          Components: impl
>    Affects Versions: 0.11
>            Reporter: Russell Jurney
>            Assignee: Joseph Adler
>             Fix For: site
>
>         Attachments: PIG-3059.patch
>
>
> See PIG-2614.
> Pig dies when one record in a LOAD of a billion records fails to parse. This is almost certainly not the desired behavior. elephant-bird and some other storage UDFs have minimum thresholds, in terms of percent and count, that must be exceeded before a job fails outright.
> We need these limits to be configurable for Pig, globally. I've come to realize what a major problem Pig's crashing on bad records is for new Pig users. I believe this feature can greatly improve Pig.
> An example configuration would look like:
> pig.storage.bad.record.threshold=0.01
> pig.storage.bad.record.min=100
> A thorough discussion of this issue is available here:
> http://www.quora.com/Big-Data/In-Big-Data-ETL-how-many-records-are-an-acceptable-loss
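For illustration, the kind of check implied by those two properties might look like the sketch below. This is only a sketch of the percent/count rule described in the issue, not the code in PIG-3059.patch; the class name {{BadRecordTracker}} and its method names are assumptions.

{code:java}
// Illustrative sketch of a percent + minimum-count bad record limit.
// The property names match the example in the issue description; the
// class itself is hypothetical and is not taken from PIG-3059.patch.
public class BadRecordTracker {

    private final double threshold;    // pig.storage.bad.record.threshold, e.g. 0.01 (1%)
    private final long minBadRecords;  // pig.storage.bad.record.min, e.g. 100

    private long totalRecords = 0;
    private long badRecords = 0;

    public BadRecordTracker(double threshold, long minBadRecords) {
        this.threshold = threshold;
        this.minBadRecords = minBadRecords;
    }

    /** Call once per record read; pass false when the record failed to parse. */
    public void record(boolean parsedOk) {
        totalRecords++;
        if (!parsedOk) {
            badRecords++;
        }
        // Fail only when both limits are exceeded: at least minBadRecords bad
        // records have been seen AND they exceed the allowed fraction of input.
        if (badRecords >= minBadRecords
                && (double) badRecords / totalRecords > threshold) {
            throw new RuntimeException("Too many bad records: " + badRecords
                    + " out of " + totalRecords + " (threshold " + threshold + ")");
        }
    }
}
{code}

With the example values above, a reader wrapping this check would skip occasional unparseable records and only fail the task once bad records are both numerous (at least 100) and frequent (more than 1% of the input).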