[ https://issues.apache.org/jira/browse/HIVE-7458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14068522#comment-14068522 ]
Damien Carol commented on HIVE-7458:
------------------------------------

You are running {{hiveContext.hql("DROP TABLE IF EXISTS hivetesting")}} in the Scala shell of the Spark project. What does that shell do? It queries the remote metastore for a non-existing table (see your provided stack trace). The remote metastore throws {{NoSuchObjectException(message:default.hivetesting table not found)}} because the Spark code calls {{HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:854)}} on a non-existing table. That is the correct behavior. You should check in the Spark code why a lookup is issued for a non-existing table; I think Spark does not handle the {{IF EXISTS}} part of this query well (see the sketches after the quoted report below). Maybe you could file a ticket on the Spark JIRA. BUT it is *not a bug* in Hive, IMHO.

> Drop Hive Table If Exists Throw out Error by Spark
> --------------------------------------------------
>
>                 Key: HIVE-7458
>                 URL: https://issues.apache.org/jira/browse/HIVE-7458
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>         Environment: Spark(1.0.1), Hive(0.13.1)
>            Reporter: Haimei Li
>
> I have Hive, MySQL, and Spark; MySQL holds the Hive metastore_db. I followed this guide to configure it:
> http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/CDH4-Installation-Guide/cdh4ig_topic_18_4.html
> Dropping a table works fine for me under the hive shell environment. But when I enter the spark-shell environment and run hiveContext.hql("DROP TABLE IF EXISTS hivetesting"), I get the following error:
> ERROR Hive: NoSuchObjectException(message:default.hivetesting table not found)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_result$get_table_resultStandardScheme.read(ThriftHiveMetastore.java:27129)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_result$get_table_resultStandardScheme.read(ThriftHiveMetastore.java:27097)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_result.read(ThriftHiveMetastore.java:27028)
>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table(ThriftHiveMetastore.java:936)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table(ThriftHiveMetastore.java:922)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:854)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
>         at com.sun.proxy.$Proxy11.getTable(Unknown Source)
>         at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:950)
>         ......
> I think that when running under the hive shell this error is suppressed, but under the spark shell Hive does not suppress it and reports the error directly. If the table does not exist, Hive should not raise an error; it should only emit a warning.
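For reference, the reported behaviour boils down to a few lines in spark-shell. A minimal sketch, assuming the versions from the report (Spark 1.0.1, Hive 0.13.1) and a hive-site.xml pointing at the remote MySQL-backed metastore on the classpath:

{code:scala}
// Sketch of the reported repro; `sc` is the SparkContext provided by spark-shell.
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

// Logs "ERROR Hive: NoSuchObjectException(message:default.hivetesting table not found)"
// when the table is absent, even though IF EXISTS was requested.
hiveContext.hql("DROP TABLE IF EXISTS hivetesting")
{code}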
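And a minimal sketch of the kind of guard Damien describes: when {{IF EXISTS}} is given, treat {{NoSuchObjectException}} from the metastore lookup as "table absent" rather than as an error. The calls to {{HiveMetaStoreClient.getTable}} and {{dropTable}} are the client API visible in the stack trace; the database name and the drop flags are assumptions for illustration, not Spark's actual code path.

{code:scala}
import org.apache.hadoop.hive.conf.HiveConf
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient
import org.apache.hadoop.hive.metastore.api.NoSuchObjectException

// Sketch only: talks to the metastore directly, assuming hive-site.xml is on the classpath.
val client = new HiveMetaStoreClient(new HiveConf())

// "DROP TABLE IF EXISTS hivetesting": probe for the table first and treat
// NoSuchObjectException as "nothing to drop" instead of surfacing an error.
val tableExists =
  try {
    client.getTable("default", "hivetesting") // the call seen in the stack trace
    true
  } catch {
    case _: NoSuchObjectException => false    // expected when the table is missing
  }

if (tableExists) {
  // deleteData = true, ignoreUnknownTab = true roughly mirrors IF EXISTS semantics
  client.dropTable("default", "hivetesting", true, true)
}

client.close()
{code}

The point is only that the exception itself is expected metastore behaviour for a missing table; the suppression has to happen on the caller's side.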