[ https://issues.apache.org/jira/browse/HIVE-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280304#comment-13280304 ]
Srinivas commented on HIVE-2907:
--------------------------------

I downloaded the source code for Hive 0.9.1. However, it appears that "ObjectStore.java" is missing the fix that fetches partition metadata in batches, so it can still cause problems when dropping a table with a large number of partitions.

Proposed fix in the dropTable method of ObjectStore.java:
==========================================================

    int partitionBatchSize = HiveConf.getIntVar(getConf(),
        ConfVars.METASTORE_BATCH_RETRIEVE_MAX);
    // Call dropPartitionCommon on each of the table's partitions, one
    // batch at a time, to follow the procedure for cleanly dropping
    // partitions without loading all partition metadata into memory.
    while (true) {
      List<MPartition> partsToDelete =
          listMPartitions(dbName, tableName, partitionBatchSize);
      if (partsToDelete == null || partsToDelete.isEmpty()) {
        break;
      }
      for (MPartition mpart : partsToDelete) {
        dropPartitionCommon(mpart);
      }
    }

> Hive error when dropping a table with large number of partitions
> ----------------------------------------------------------------
>
>                 Key: HIVE-2907
>                 URL: https://issues.apache.org/jira/browse/HIVE-2907
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 0.9.0
>         Environment: General. Hive Metastore bug.
>            Reporter: Mousom Dhar Gupta
>            Assignee: Mousom Dhar Gupta
>            Priority: Minor
>             Fix For: 0.9.0
>
>         Attachments: HIVE-2907.1.patch.txt, HIVE-2907.2.patch.txt, HIVE-2907.3.patch.txt, HIVE-2907.D2505.1.patch, HIVE-2907.D2505.2.patch, HIVE-2907.D2505.3.patch, HIVE-2907.D2505.4.patch, HIVE-2907.D2505.5.patch, HIVE-2907.D2505.6.patch, HIVE-2907.D2505.7.patch
>
>   Original Estimate: 10h
>  Remaining Estimate: 10h
>
> Running into an "Out Of Memory" error when trying to drop a table with 128K partitions.
> The methods dropTable in
> metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
> and dropTable in ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
> encounter out-of-memory errors when dropping tables with lots of
> partitions, because they try to load the metadata for every partition
> into memory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
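As an aside, the batched-drop loop proposed above can be sketched in isolation. This is a minimal, self-contained illustration of the pattern only: `listBatch` and `drop` are hypothetical stand-ins for the metastore's `listMPartitions` and `dropPartitionCommon`, and an in-memory list stands in for the partition store.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the batched-drop pattern: repeatedly fetch at most
// batchSize items and drop them, so the full set is never held
// in memory at once. Names here are illustrative, not Hive's API.
public class BatchedDrop {
    static final List<String> store = new ArrayList<>();

    // Stand-in for listMPartitions: return at most 'max' remaining items.
    static List<String> listBatch(int max) {
        return new ArrayList<>(store.subList(0, Math.min(max, store.size())));
    }

    // Stand-in for dropPartitionCommon: remove one item from the store.
    static void drop(String partition) {
        store.remove(partition);
    }

    // The loop structure from the proposed fix: re-list inside the loop,
    // and stop once no partitions remain.
    static void drainInBatches(int batchSize) {
        while (true) {
            List<String> batch = listBatch(batchSize);
            if (batch.isEmpty()) {
                break; // all partitions dropped
            }
            for (String p : batch) {
                drop(p);
            }
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            store.add("part=" + i);
        }
        drainInBatches(3); // analogous to METASTORE_BATCH_RETRIEVE_MAX
        System.out.println(store.size()); // prints 0
    }
}
```

Note that the batch is re-fetched inside the loop and the loop breaks when the batch comes back empty; fetching the list once before the loop, or inverting the emptiness check, would either drop only the first batch repeatedly or exit immediately.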