[ https://issues.apache.org/jira/browse/PIG-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Cheolsoo Park updated PIG-3059:
-------------------------------

    Attachment: avro_test_files-2.tar.gz
                PIG-3059-2.patch

I am uploading a new patch that includes the following changes:

* The error rate is now printed as part of the job stats (see the last two lines below):
{code}
Counters:
Total records written : 1
Total bytes written : 10
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Total input splits processed : 2
Total bad input splits : 1 ( 50.0% )
{code}
To do this, I made changes to the {{PigStats}} interface. Please let me know if this is not recommended.
* The error message is improved. The location of the bad split that caused the run-time exception is now printed:
{code}
Backend error : error while reading input split: hdfs://cheolsoo-mr1-0.ent.cloudera.com:8020/user/cheolsoo/bad.avro:0+81
{code}
* As discussed, {{InputErrorTracker}} now counts the number of input splits instead of records.
* Since the {{ignore_bad_files}} option in {{AvroStorage}} is already in branch-0.11, I decided not to delete it, for backward compatibility. Enabling {{ignore_bad_files}} is equivalent to setting {{pig.load.bad.split.threshold}} to 1.0.
* Lastly, I am uploading a new tarball for the new test cases. To run them, execute the following commands:
{code}
tar -xf avro_test_files-2.tar.gz
svn rm contrib/piggybank/java/src/test/java/org/apache/pig/piggybank/test/storage/avro/avro_test_files/test_corrupted_file.avro
svn add contrib/piggybank/java/src/test/java/org/apache/pig/piggybank/test/storage/avro/avro_test_files/test_corrupted_file
svn rm contrib/piggybank/java/src/test/java/org/apache/pig/piggybank/test/storage/avro/avro_test_files/expected_testCorruptedFile.avro
svn add contrib/piggybank/java/src/test/java/org/apache/pig/piggybank/test/storage/avro/avro_test_files/expected_testCorruptedFile2.avro
svn add contrib/piggybank/java/src/test/java/org/apache/pig/piggybank/test/storage/avro/avro_test_files/expected_testCorruptedFile3.avro
svn add contrib/piggybank/java/src/test/java/org/apache/pig/piggybank/test/storage/avro/avro_test_files/expected_testCorruptedFile4.avro
{code}
Thanks!
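For readers following along, here is a minimal sketch of what split-level error tracking against a fractional threshold can look like. The actual change extends {{InputErrorTracker}}; the class and method names below ({{SplitErrorTracker}}, {{incrementSplits}}, {{incrementBadSplits}}) are made up for illustration and are not the code in PIG-3059-2.patch:
{code}
import java.io.IOException;

// Illustrative sketch only: tracks how many input splits were processed and
// how many were bad, and fails the job once the bad-split ratio exceeds a
// configured threshold (the behavior pig.load.bad.split.threshold controls).
public class SplitErrorTracker {
    private final double threshold;
    private long splitsProcessed = 0;
    private long badSplits = 0;

    public SplitErrorTracker(double threshold) {
        this.threshold = threshold;
    }

    // Called once per input split before it is read.
    public void incrementSplits() {
        splitsProcessed++;
    }

    // Called when reading a split fails. Rethrows, with the split location in
    // the message, once the error rate goes above the threshold. With a
    // threshold of 1.0 the ratio can never exceed it, so every bad split is
    // skipped, which matches the ignore_bad_files equivalence described above.
    public void incrementBadSplits(String splitLocation, IOException cause)
            throws IOException {
        badSplits++;
        double errorRate = (double) badSplits / (double) splitsProcessed;
        if (errorRate > threshold) {
            throw new IOException(
                    "error while reading input split: " + splitLocation, cause);
        }
    }
}
{code}
With the counters from the example above (2 splits processed, 1 bad), the error rate is 0.5, so a threshold of 0.5 or higher lets the job continue while anything lower fails it.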
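And assuming {{pig.load.bad.split.threshold}} can be set per script with Pig's {{SET}} command, like other job properties, usage could look like this (the file name and threshold value are made up):
{code}
SET pig.load.bad.split.threshold '0.5';
A = LOAD 'bad.avro' USING org.apache.pig.piggybank.storage.avro.AvroStorage();
DUMP A;
{code}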
> Global configurable minimum 'bad record' thresholds
> ---------------------------------------------------
>
>                 Key: PIG-3059
>                 URL: https://issues.apache.org/jira/browse/PIG-3059
>             Project: Pig
>          Issue Type: New Feature
>          Components: impl
>    Affects Versions: 0.11
>            Reporter: Russell Jurney
>            Assignee: Cheolsoo Park
>             Fix For: 0.12
>
>         Attachments: avro_test_files-2.tar.gz, PIG-3059-2.patch, PIG-3059.patch, test_avro_files.tar.gz
>
>
> See PIG-2614.
> Pig dies when one record in a LOAD of a billion records fails to parse. This is almost certainly not the desired behavior. elephant-bird and some other storage UDFs have minimum thresholds, in terms of percent and count, that must be exceeded before a job will fail outright.
> We need these limits to be configurable for Pig, globally. I've come to realize what a major problem Pig's crashing on bad records is for new Pig users. I believe this feature can greatly improve Pig.
> An example config would look like:
> pig.storage.bad.record.threshold=0.01
> pig.storage.bad.record.min=100
> A thorough discussion of this issue is available here:
> http://www.quora.com/Big-Data/In-Big-Data-ETL-how-many-records-are-an-acceptable-loss