[ https://issues.apache.org/jira/browse/HIVE-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12802448#action_12802448 ]
Zheng Shao commented on HIVE-1032:
----------------------------------
Another error that we might want to cover in the same patch. The error message
we should output for this case is: "Data file split
hdfs://dfs:9000/user/hive/warehouse/mytable/ds=2009-10-04/part-00232, range:
0-0 is corrupted".
{code}
2010-01-19 11:53:30,581 INFO org.apache.hadoop.mapred.MapTask: split: hdfs://dfs:9000/user/hive/warehouse/mytable/ds=2009-10-04/part-00232, range: 0-0
2010-01-19 11:53:30,795 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:180)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1450)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1428)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1417)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1412)
    at org.apache.hadoop.mapred.SequenceFileRecordReader.<init>(SequenceFileRecordReader.java:43)
    at org.apache.hadoop.mapred.SequenceFileInputFormat.getRecordReader(SequenceFileInputFormat.java:63)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:236)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:338)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
    at org.apache.hadoop.mapred.Child.main(Child.java:159)
2010-01-19 11:53:30,801 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
{code}
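As a sketch of how this case could be detected: an EOFException raised from
SequenceFile$Reader.init means the reader could not even read the file header,
which is a strong signal that the split's file is truncated or corrupted. A
heuristic along these lines (class and method names here are illustrative only,
not part of the attached patches) could match the trace and pull the split
location from the preceding log line:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: flag the EOFException-while-opening-a-SequenceFile
// pattern in a task log as a corrupted data file.
public class CorruptFileHeuristic {
  // EOFException thrown from SequenceFile$Reader.init => header unreadable.
  private static final Pattern CORRUPT_FILE = Pattern.compile(
      "java\\.io\\.EOFException[\\s\\S]*SequenceFile\\$Reader\\.init");
  // Pull "hdfs://..., range: N-M" out of the MapTask "split:" log line.
  private static final Pattern SPLIT_LINE = Pattern.compile(
      "split:\\s*(\\S+,\\s*range:\\s*\\S+)");

  /** Returns a probable-cause message, or null if the pattern is absent. */
  public static String diagnose(String taskLog) {
    if (!CORRUPT_FILE.matcher(taskLog).find()) {
      return null;
    }
    Matcher m = SPLIT_LINE.matcher(taskLog);
    String split = m.find() ? m.group(1) : "(unknown split)";
    return "Data file split " + split + " is corrupted";
  }
}
```

Feeding the log above through `diagnose` would produce exactly the suggested
message, split path and range included.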
> Better Error Messages for Execution Errors
> ------------------------------------------
>
> Key: HIVE-1032
> URL: https://issues.apache.org/jira/browse/HIVE-1032
> Project: Hadoop Hive
> Issue Type: New Feature
> Components: Query Processor
> Reporter: Paul Yang
> Assignee: Paul Yang
> Attachments: HIVE-1032.1.patch, HIVE-1032.2.patch, HIVE-1032.3.patch
>
>
> Three common errors that occur during execution are:
> 1. Map-side group-by causing an out of memory exception due to large
> aggregation hash tables
> 2. ScriptOperator failing due to the user's script throwing an exception or
> otherwise returning a non-zero error code
> 3. Incorrectly specifying the join order of small and large tables, causing
> the large table to be loaded into memory and producing an out of memory
> exception.
> These errors are typically discovered by manually examining the error log
> files of the failed task. This task proposes to create a feature that would
> automatically read the error logs and output a probable cause and solution to
> the command line.
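The proposal above amounts to a table of known log signatures mapped to
probable-cause/solution messages, scanned in order against the failed task's
log. A minimal sketch of that idea (class names, patterns, and the suggested
fixes are illustrative assumptions, not the code in the attached patches):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Hypothetical sketch: scan a failed task's log against known error
// signatures and return the first matching probable cause and suggestion.
public class TaskLogDiagnoser {
  // Ordered so that more specific signatures can be checked first.
  private static final Map<Pattern, String> HEURISTICS = new LinkedHashMap<>();
  static {
    // Error 1: map-side group-by hash table exhausts memory.
    HEURISTICS.put(
        Pattern.compile("OutOfMemoryError[\\s\\S]*GroupByOperator"),
        "Probable cause: map-side aggregation hash table exhausted memory. "
            + "Possible fix: lower hive.map.aggr.hash.percentmemory or set "
            + "hive.map.aggr=false.");
    // Error 2: user script failed inside ScriptOperator.
    HEURISTICS.put(
        Pattern.compile("ScriptOperator[\\s\\S]*(non-zero|exit code)"),
        "Probable cause: the user script failed. Check the script's stderr "
            + "in the task logs.");
    // Error 3: join order loads the large table into memory.
    HEURISTICS.put(
        Pattern.compile("OutOfMemoryError[\\s\\S]*JoinOperator"),
        "Probable cause: join order loads a large table into memory. "
            + "Possible fix: list the largest table last in the join, or use "
            + "a STREAMTABLE hint.");
  }

  /** Returns a probable-cause message, or null if no signature matches. */
  public static String diagnose(String taskLog) {
    for (Map.Entry<Pattern, String> e : HEURISTICS.entrySet()) {
      if (e.getKey().matcher(taskLog).find()) {
        return e.getValue();
      }
    }
    return null;
  }
}
```

The ordered map keeps the matching deterministic, and adding a new detectable
error (such as the corrupted-split case above) is a one-entry change.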