[ https://issues.apache.org/jira/browse/HIVE-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mousom Dhar Gupta updated HIVE-2907:
------------------------------------
Attachment: HIVE-2907.3.patch.txt
> Hive error when dropping a table with large number of partitions
> ----------------------------------------------------------------
>
> Key: HIVE-2907
> URL: https://issues.apache.org/jira/browse/HIVE-2907
> Project: Hive
> Issue Type: Bug
> Components: Metastore
> Affects Versions: 0.9.0
> Environment: General. Hive Metastore bug.
> Reporter: Mousom Dhar Gupta
> Assignee: Mousom Dhar Gupta
> Priority: Minor
> Fix For: 0.9.0
>
> Attachments: HIVE-2907.1.patch.txt, HIVE-2907.2.patch.txt,
> HIVE-2907.3.patch.txt, HIVE-2907.D2505.1.patch, HIVE-2907.D2505.2.patch,
> HIVE-2907.D2505.3.patch, HIVE-2907.D2505.4.patch, HIVE-2907.D2505.5.patch,
> HIVE-2907.D2505.6.patch
>
> Original Estimate: 10h
> Remaining Estimate: 10h
>
> We run into an OutOfMemoryError when trying to drop a table with 128K
> partitions.
> The dropTable methods in
> metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java and
> ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java run out of memory
> when dropping tables with many partitions because they load the metadata
> for every partition into memory at once.