[https://issues.apache.org/jira/browse/HIVE-3387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13451761#comment-13451761]
Hudson commented on HIVE-3387:
------------------------------
Integrated in Hive-trunk-h0.21 #1659 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1659/])
HIVE-3387. Meta data file size exceeds limit (Navis Ryu via cws) (Revision 1382600)
Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1382600
Files :
* /hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
* /hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
> meta data file size exceeds limit
> ---------------------------------
>
> Key: HIVE-3387
> URL: https://issues.apache.org/jira/browse/HIVE-3387
> Project: Hive
> Issue Type: Bug
> Affects Versions: 0.7.1
> Reporter: Alexander Alten-Lorenz
> Assignee: Navis
> Fix For: 0.10.0
>
> Attachments: HIVE-3387.1.patch.txt
>
>
> The cause is certainly that we use an array list instead of a set structure
> in the split locations API. This looks like a bug in Hive's CombineFileInputFormat.
> Reproduce:
> Set mapreduce.jobtracker.split.metainfo.maxsize=100000000 when submitting the
> Hive query, then run a big Hive query that writes data into a partitioned table.
> Due to the large number of splits, the job submitted to Hadoop fails with an
> exception saying:
> meta data size exceeds 100000000.