Alexander Alten-Lorenz created HIVE-3387:
--------------------------------------------

             Summary: meta data file size exceeds limit
                 Key: HIVE-3387
                 URL: https://issues.apache.org/jira/browse/HIVE-3387
             Project: Hive
          Issue Type: Bug
    Affects Versions: 0.7.1
            Reporter: Alexander Alten-Lorenz
             Fix For: 0.9.1


The cause is almost certainly that an ArrayList is used instead of a Set in the
split-locations API, so duplicate host entries accumulate in the split meta
info. This looks like a bug in Hive's CombineFileInputFormat.
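The effect described above can be sketched as follows. This is a minimal illustration, not Hive's actual code: the class and method names (`SplitLocations`, `collectWithList`, `collectWithSet`) are hypothetical, but it shows how collecting block locations into a List keeps one entry per block replica, while a Set collapses duplicates and keeps the serialized split meta info small.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the suspected bug: combining many blocks that are
// replicated on the same few hosts.
public class SplitLocations {

    // List-based collection: every block contributes its full replica list,
    // so the same hostnames are stored over and over.
    public static List<String> collectWithList(String[][] blockHosts) {
        List<String> locations = new ArrayList<>();
        for (String[] hosts : blockHosts) {
            for (String host : hosts) {
                locations.add(host); // duplicates accumulate per block
            }
        }
        return locations;
    }

    // Set-based collection: each distinct host is stored once.
    public static Set<String> collectWithSet(String[][] blockHosts) {
        Set<String> locations = new HashSet<>();
        for (String[] hosts : blockHosts) {
            for (String host : hosts) {
                locations.add(host); // duplicates collapse
            }
        }
        return locations;
    }

    public static void main(String[] args) {
        // Three blocks, each replicated on the same two hosts.
        String[][] blockHosts = {
            {"node1", "node2"}, {"node1", "node2"}, {"node1", "node2"}
        };
        System.out.println(collectWithList(blockHosts).size()); // 6
        System.out.println(collectWithSet(blockHosts).size());  // 2
    }
}
```

With thousands of splits the List variant's duplicated hostnames are what inflate the serialized split meta info past the configured limit.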

Reproduce:
Set mapreduce.jobtracker.split.metainfo.maxsize=100000000 when submitting the
Hive query, then run a large Hive query that writes data into a partitioned
table. Because of the large number of splits, the job submitted to Hadoop
fails with an exception:

meta data size exceeds 100000000.
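A sketch of the reproduction from the Hive CLI, assuming hypothetical table names (`target`, `source`) and a session-level override of the property named above:

```shell
# Set the split meta info ceiling for this job submission (illustrative value),
# then run a query that writes into a partitioned table and generates many splits.
hive --hiveconf mapreduce.jobtracker.split.metainfo.maxsize=100000000 \
     -e "INSERT OVERWRITE TABLE target PARTITION (dt) SELECT col, dt FROM source"
```

Raising the limit (or setting it to -1 to disable the check) only masks the symptom; deduplicating the locations would shrink the meta info itself.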


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
