I'll just go ahead and answer my own question here. In my Hive script I had
a ghost "set hive.optimize.ppd=false;". That, I believe, caused my
metastore operations to be far in excess of what the query actually needed.
Things are working now. I just didn't want to ask a question, answer it
(due to my own error), and not have others be able to learn from my
mistake. Sorry, group!
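For anyone hitting the same thing, here is roughly what the cleaned-up script looks like. This is just a sketch: the table and partition names come from my query quoted below, and the dynamic-partition settings are the standard Hive ones.

```sql
-- Make sure no stray override like this is hiding in the script; with
-- predicate pushdown disabled, Hive can end up asking the metastore for
-- every partition instead of just day='2013-08-04'.
-- set hive.optimize.ppd=false;   -- <-- remove this

-- Usual settings for a dynamic-partition insert.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE table_orc PARTITION (day, source)
SELECT * FROM table_rc WHERE day = '2013-08-04';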

John
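P.S. A quick way to sweep other scripts for the same ghost setting; the directory and file below are made up purely for illustration, so point the grep at wherever your own .hql files live.

```shell
# Set up a throwaway demo directory with one offending script
# (illustrative only -- substitute your real script directory).
mkdir -p /tmp/hive-scripts-demo
printf 'set hive.optimize.ppd=false;\nSELECT 1;\n' > /tmp/hive-scripts-demo/job.hql

# Any hit here is a script that still overrides predicate pushdown.
grep -rn 'hive\.optimize\.ppd' /tmp/hive-scripts-demo
```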




On Sat, Aug 10, 2013 at 8:18 AM, John Omernik <j...@omernik.com> wrote:

> I am doing some testing going from table_rc to table_orc. The
> table/partition structure is the same, and there is a two-level partition:
> day= then source=.
>
> I am doing a single day (including all 10 or so sources in the day). This
> worked just fine in one environment, but now I am getting strange errors
> before things even get to the data (or so it seems). The error appears to
> be related to the metastore, but on the table_rc table rather than the
> ORC one. I am curious why this error is happening. This is on the MapR
> distribution of Hadoop, but it seems more related to metastore operations
> than to Hadoop filesystem operations.
>
>
> INSERT OVERWRITE TABLE table_orc PARTITION (day, source)
> SELECT * from table_rc where day = '2013-08-04'
> 13/08/10 08:44:41 INFO parse.ParseDriver: Parse Completed
> 13/08/10 08:44:41 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 13/08/10 08:44:41 INFO parse.SemanticAnalyzer: Completed phase 1 of
> Semantic Analysis
> 13/08/10 08:44:41 INFO parse.SemanticAnalyzer: Get metadata for source
> tables
> 13/08/10 08:44:41 INFO metastore.HiveMetaStore: 0: get_table :
> db=hive_flow tbl=pcaps
> 13/08/10 08:44:41 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr 
> cmd=get_table
> : db=hive_flow tbl=table_rc
> 13/08/10 08:44:41 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 13/08/10 08:44:41 INFO parse.SemanticAnalyzer: Get metadata for
> destination tables
> 13/08/10 08:44:41 INFO metastore.HiveMetaStore: 0: get_table :
> db=hive_flow tbl=table_orc
> 13/08/10 08:44:41 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr 
> cmd=get_table
> : db=hive_flow tbl=table_orc
> 13/08/10 08:44:41 INFO parse.SemanticAnalyzer: Completed getting MetaData
> in Semantic Analysis
> 13/08/10 08:44:41 INFO metastore.HiveMetaStore: 0:
> get_partitions_with_auth : db=hive_flow tbl=table_rc
> 13/08/10 08:44:41 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr 
> cmd=get_partitions_with_auth
> : db=hive_flow tbl=table_rc
> 13/08/10 08:52:12 ERROR metastore.RetryingHMSHandler:
> java.lang.OutOfMemoryError: Java heap space
> at com.mysql.jdbc.MysqlIO.buildResultSetWithRows(MysqlIO.java:2098)
>  at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:459)
> at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2076)
>  at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1451)
> at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1787)
>  at com.mysql.jdbc.Connection.execSQL(Connection.java:3283)
> at
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1332)
>  at
> com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1467)
> at
> org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
>  at
> org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
> at
> org.datanucleus.store.rdbms.SQLController.executeStatementQuery(SQLController.java:457)
>  at
> org.datanucleus.store.rdbms.scostore.RDBMSMapEntrySetStore.iterator(RDBMSMapEntrySetStore.java:248)
> at
> org.datanucleus.sco.SCOUtils.populateMapDelegateWithStoreData(SCOUtils.java:987)
>  at org.datanucleus.sco.backed.Map.loadFromStore(Map.java:258)
> at org.datanucleus.sco.backed.Map.keySet(Map.java:509)
>  at
> org.datanucleus.store.fieldmanager.LoadFieldManager.internalFetchObjectField(LoadFieldManager.java:118)
> at
> org.datanucleus.store.fieldmanager.AbstractFetchFieldManager.fetchObjectField(AbstractFetchFieldManager.java:114)
>  at
> org.datanucleus.state.AbstractStateManager.replacingObjectField(AbstractStateManager.java:1183)
> at
> org.apache.hadoop.hive.metastore.model.MSerDeInfo.jdoReplaceField(MSerDeInfo.java)
>  at
> org.apache.hadoop.hive.metastore.model.MSerDeInfo.jdoReplaceFields(MSerDeInfo.java)
> at
> org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860)
>  at
> org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2879)
> at
> org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldsInFetchPlan(JDOStateManagerImpl.java:1647)
>  at
> org.datanucleus.ObjectManagerImpl.performDetachAllOnTxnEndPreparation(ObjectManagerImpl.java:3552)
> at org.datanucleus.ObjectManagerImpl.preCommit(ObjectManagerImpl.java:3291)
>  at
> org.datanucleus.TransactionImpl.internalPreCommit(TransactionImpl.java:369)
> at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:256)
>  at org.datanucleus.jdo.JDOTransaction.commit(JDOTransaction.java:83)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:345)
>  at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsWithAuth(ObjectStore.java:1405)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>
> 13/08/10 08:52:12 INFO ql.Driver: </PERFLOG method=compile
> start=1376138681224 end=1376139132222 duration=450998>
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at com.mysql.jdbc.MysqlIO.buildResultSetWithRows(MysqlIO.java:2098)
>  at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:459)
> at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2076)
>  at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1451)
> at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1787)
>  at com.mysql.jdbc.Connection.execSQL(Connection.java:3283)
> at
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1332)
>  at
> com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1467)
> at
> org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
>  at
> org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
> at
> org.datanucleus.store.rdbms.SQLController.executeStatementQuery(SQLController.java:457)
>  at
> org.datanucleus.store.rdbms.scostore.RDBMSMapEntrySetStore.iterator(RDBMSMapEntrySetStore.java:248)
> at
> org.datanucleus.sco.SCOUtils.populateMapDelegateWithStoreData(SCOUtils.java:987)
>  at org.datanucleus.sco.backed.Map.loadFromStore(Map.java:258)
> at org.datanucleus.sco.backed.Map.keySet(Map.java:509)
>  at
> org.datanucleus.store.fieldmanager.LoadFieldManager.internalFetchObjectField(LoadFieldManager.java:118)
> at
> org.datanucleus.store.fieldmanager.AbstractFetchFieldManager.fetchObjectField(AbstractFetchFieldManager.java:114)
>  at
> org.datanucleus.state.AbstractStateManager.replacingObjectField(AbstractStateManager.java:1183)
> at
> org.apache.hadoop.hive.metastore.model.MSerDeInfo.jdoReplaceField(MSerDeInfo.java)
>  at
> org.apache.hadoop.hive.metastore.model.MSerDeInfo.jdoReplaceFields(MSerDeInfo.java)
> at
> org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860)
>  at
> org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2879)
> at
> org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldsInFetchPlan(JDOStateManagerImpl.java:1647)
>  at
> org.datanucleus.ObjectManagerImpl.performDetachAllOnTxnEndPreparation(ObjectManagerImpl.java:3552)
> at org.datanucleus.ObjectManagerImpl.preCommit(ObjectManagerImpl.java:3291)
>  at
> org.datanucleus.TransactionImpl.internalPreCommit(TransactionImpl.java:369)
> at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:256)
>  at org.datanucleus.jdo.JDOTransaction.commit(JDOTransaction.java:83)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:345)
>  at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsWithAuth(ObjectStore.java:1405)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>
