Hi Marcelo,
Exactly. I found it a few minutes ago.

I ran the Hive 0.12 MySQL metastore schema SQL against my Hive 0.10
metastore, which created the missing tables, and it seems to be working now.
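For anyone hitting the same thing, the gist was roughly the following.
This is a minimal sketch: the script path, database name, and MySQL user
are assumptions about a typical setup, not the exact commands I ran.

    # Back up the metastore first; running a newer schema script against
    # an older metastore is not an officially supported upgrade path.
    mysqldump -u hive -p metastore > metastore-backup.sql

    # Apply the Hive 0.12 MySQL schema script. Its CREATE TABLE IF NOT
    # EXISTS statements add only the tables that are missing, such as
    # the VERSION table that Spark SQL's bundled Hive 0.12 expects.
    mysql -u hive -p metastore < hive-schema-0.12.0.mysql.sql

    # Sanity check: the VERSION table should now exist.
    mysql -u hive -p metastore -e 'SELECT SCHEMA_VERSION FROM VERSION;'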

I'm not sure whether everything else in CDH 4.6/Hive 0.10 still works
after that change, though.

It looks like we cannot use Spark SQL cleanly with CDH4 unless we
upgrade to CDH5.
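
Side note: the DataNucleus error below also points at another route,
letting it create the missing tables itself via
"datanucleus.autoCreateTables". I have not tried that, and whether the
property actually reaches DataNucleus when passed on the command line
like this is an assumption:

    # Hypothetical and untested; auto-creating tables in a shared
    # metastore is risky, so the schema scripts are the safer option.
    ./spark-sql --hiveconf datanucleus.autoCreateTables=true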


Thanks for your response!

Thanks,
Anurag Tangri


On Wed, Oct 15, 2014 at 12:02 PM, Marcelo Vanzin <van...@cloudera.com>
wrote:

> Hi Anurag,
>
> Spark SQL (from the Spark standard distribution / sources) currently
> requires Hive 0.12; as you mention, CDH4 has Hive 0.10, so that's not
> gonna work.
>
> CDH 5.2 ships with Spark 1.1.0 and is modified so that Spark SQL can
> talk to the Hive 0.13.1 that is also bundled with CDH, so if that's an
> option for you, you could try it out.
>
>
> On Wed, Oct 15, 2014 at 11:23 AM, Anurag Tangri <atan...@groupon.com>
> wrote:
> > I see that the Hive 0.10.0 metastore SQL does not have a VERSION
> > table, but Spark is looking for it.
> >
> > Has anyone else faced this issue, or are there any ideas on how to fix it?
> >
> >
> > Thanks,
> > Anurag Tangri
> >
> >
> >
> > On Wed, Oct 15, 2014 at 10:51 AM, Anurag Tangri <atan...@groupon.com>
> wrote:
> >>
> >> Hi,
> >> I compiled Spark 1.1.0 against CDH 4.6, but when I try to bring the
> >> spark-sql CLI up, it gives this error:
> >>
> >>
> >> ==============
> >>
> >> [atangri@pit-uat-hdputil1 bin]$ ./spark-sql
> >> Spark assembly has been built with Hive, including Datanucleus jars on classpath
> >> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
> >> log4j:WARN No appenders could be found for logger (org.apache.hadoop.conf.Configuration).
> >> log4j:WARN Please initialize the log4j system properly.
> >> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
> >> Unable to initialize logging using hive-log4j.properties, not found on CLASSPATH!
> >> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> >> 14/10/15 17:45:17 INFO SecurityManager: Changing view acls to: atangri,
> >> 14/10/15 17:45:17 INFO SecurityManager: Changing modify acls to: atangri,
> >> 14/10/15 17:45:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(atangri, ); users with modify permissions: Set(atangri, )
> >> 14/10/15 17:45:17 INFO Slf4jLogger: Slf4jLogger started
> >> 14/10/15 17:45:17 INFO Remoting: Starting remoting
> >> 14/10/15 17:45:17 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@pit-uat-hdputil1.snc1:54506]
> >> 14/10/15 17:45:17 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@pit-uat-hdputil1.snc1:54506]
> >> 14/10/15 17:45:17 INFO Utils: Successfully started service 'sparkDriver' on port 54506.
> >> 14/10/15 17:45:17 INFO SparkEnv: Registering MapOutputTracker
> >> 14/10/15 17:45:17 INFO SparkEnv: Registering BlockManagerMaster
> >> 14/10/15 17:45:17 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20141015174517-bdfa
> >> 14/10/15 17:45:17 INFO Utils: Successfully started service 'Connection manager for block manager' on port 58400.
> >> 14/10/15 17:45:17 INFO ConnectionManager: Bound socket to port 58400 with id = ConnectionManagerId(pit-uat-hdputil1.snc1,58400)
> >> 14/10/15 17:45:17 INFO MemoryStore: MemoryStore started with capacity 265.1 MB
> >> 14/10/15 17:45:17 INFO BlockManagerMaster: Trying to register BlockManager
> >> 14/10/15 17:45:17 INFO BlockManagerMasterActor: Registering block manager pit-uat-hdputil1.snc1:58400 with 265.1 MB RAM
> >> 14/10/15 17:45:17 INFO BlockManagerMaster: Registered BlockManager
> >> 14/10/15 17:45:17 INFO HttpFileServer: HTTP File server directory is /tmp/spark-c7f28004-6189-424f-a214-379d5dcc72b7
> >> 14/10/15 17:45:17 INFO HttpServer: Starting HTTP Server
> >> 14/10/15 17:45:17 INFO Utils: Successfully started service 'HTTP file server' on port 33666.
> >> 14/10/15 17:45:18 INFO Utils: Successfully started service 'SparkUI' on port 4040.
> >> 14/10/15 17:45:18 INFO SparkUI: Started SparkUI at http://pit-uat-hdputil1.snc1:4040
> >> 14/10/15 17:45:18 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@pit-uat-hdputil1.snc1:54506/user/HeartbeatReceiver
> >> spark-sql> show tables;
> >> 14/10/15 17:45:22 INFO ParseDriver: Parsing command: show tables
> >> 14/10/15 17:45:22 INFO ParseDriver: Parse Completed
> >> 14/10/15 17:45:23 INFO Driver: <PERFLOG method=Driver.run>
> >> 14/10/15 17:45:23 INFO Driver: <PERFLOG method=TimeToSubmit>
> >> 14/10/15 17:45:23 INFO Driver: <PERFLOG method=compile>
> >> 14/10/15 17:45:23 INFO Driver: <PERFLOG method=parse>
> >> 14/10/15 17:45:23 INFO ParseDriver: Parsing command: show tables
> >> 14/10/15 17:45:23 INFO ParseDriver: Parse Completed
> >> 14/10/15 17:45:23 INFO Driver: </PERFLOG method=parse start=1413395123538 end=1413395123539 duration=1>
> >> 14/10/15 17:45:23 INFO Driver: <PERFLOG method=semanticAnalyze>
> >> 14/10/15 17:45:23 INFO Driver: Semantic Analysis Completed
> >> 14/10/15 17:45:23 INFO Driver: </PERFLOG method=semanticAnalyze start=1413395123539 end=1413395123641 duration=102>
> >> 14/10/15 17:45:23 INFO ListSinkOperator: Initializing Self 0 OP
> >> 14/10/15 17:45:23 INFO ListSinkOperator: Operator 0 OP initialized
> >> 14/10/15 17:45:23 INFO ListSinkOperator: Initialization Done 0 OP
> >> 14/10/15 17:45:23 INFO Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
> >> 14/10/15 17:45:23 INFO Driver: </PERFLOG method=compile start=1413395123517 end=1413395123696 duration=179>
> >> 14/10/15 17:45:23 INFO Driver: <PERFLOG method=Driver.execute>
> >> 14/10/15 17:45:23 INFO Driver: Starting command: show tables
> >> 14/10/15 17:45:23 INFO Driver: </PERFLOG method=TimeToSubmit start=1413395123517 end=1413395123698 duration=181>
> >> 14/10/15 17:45:23 INFO Driver: <PERFLOG method=runTasks>
> >> 14/10/15 17:45:23 INFO Driver: <PERFLOG method=task.DDL.Stage-0>
> >> 14/10/15 17:45:23 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> >> 14/10/15 17:45:23 INFO ObjectStore: ObjectStore, initialize called
> >> 14/10/15 17:45:23 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
> >> 14/10/15 17:45:24 WARN BoneCPConfig: Max Connections < 1. Setting to 20
> >> 14/10/15 17:45:24 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
> >> 14/10/15 17:45:24 INFO ObjectStore: Initialized ObjectStore
> >> 14/10/15 17:45:24 WARN Query: Query for candidates of org.apache.hadoop.hive.metastore.model.MVersionTable and subclasses resulted in no possible candidates
> >> Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
> >> org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
> >>     at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:485)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3387)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3199)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2850)
> >>     at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1624)
> >>     at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:953)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:687)
> >>     at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:409)
> >>     at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:903)
> >>     at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:356)
> >>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1744)
> >>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
> >>     at org.datanucleus.store.query.Query.execute(Query.java:1654)
> >>     at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
> >>     at org.apache.hadoop.hive.metastore.ObjectStore.getMSchemaVersion(ObjectStore.java:5693)
> >>     at org.apache.hadoop.hive.metastore.ObjectStore.getMetaStoreSchemaVersion(ObjectStore.java:5675)
> >>     at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:5634)
> >>     at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:5622)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>     at java.lang.reflect.Method.invoke(Method.java:483)
> >>     at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
> >>     at com.sun.proxy.$Proxy13.verifySchema(Unknown Source)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:403)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:441)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:326)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:286)
> >>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
> >>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4060)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:121)
> >>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >>     at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
> >>     at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1210)
> >>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
> >>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2372)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2383)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1139)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1128)
> >>     at org.apache.hadoop.hive.ql.exec.DDLTask.showTables(DDLTask.java:2236)
> >>     at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:333)
> >>     at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
> >>     at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
> >>     at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
> >>     at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
> >>     at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
> >>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
> >>     at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:298)
> >>     at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:272)
> >>     at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult$lzycompute(NativeCommand.scala:35)
> >>     at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult(NativeCommand.scala:35)
> >>     at org.apache.spark.sql.hive.execution.NativeCommand.execute(NativeCommand.scala:38)
> >>     at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:360)
> >>     at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:360)
> >>     at org.apache.spark.sql.SchemaRDDLike$class.$init$(SchemaRDDLike.scala:58)
> >>     at org.apache.spark.sql.SchemaRDD.<init>(SchemaRDD.scala:103)
> >>     at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
> >>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>     at java.lang.reflect.Method.invoke(Method.java:483)
> >>     at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
> >>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
> >>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> >> 14/10/15 17:45:24 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.12.0
> >> 14/10/15 17:45:24 WARN Query: Query for candidates of org.apache.hadoop.hive.metastore.model.MVersionTable and subclasses resulted in no possible candidates
> >> Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
> >> org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
> >>     at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:485)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3387)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3199)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2850)
> >>     at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1624)
> >>     at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:953)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:687)
> >>     at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:409)
> >>     at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:903)
> >>     at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:356)
> >>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1744)
> >>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
> >>     at org.datanucleus.store.query.Query.execute(Query.java:1654)
> >>     at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
> >>     at org.apache.hadoop.hive.metastore.ObjectStore.getMSchemaVersion(ObjectStore.java:5693)
> >>     at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:5724)
> >>     at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:5644)
> >>     at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:5622)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>     at java.lang.reflect.Method.invoke(Method.java:483)
> >>     at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
> >>     at com.sun.proxy.$Proxy13.verifySchema(Unknown Source)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:403)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:441)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:326)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:286)
> >>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
> >>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4060)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:121)
> >>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >>     at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
> >>     at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1210)
> >>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
> >>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2372)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2383)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1139)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1128)
> >>     at org.apache.hadoop.hive.ql.exec.DDLTask.showTables(DDLTask.java:2236)
> >>     at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:333)
> >>     at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
> >>     at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
> >>     at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
> >>     at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
> >>     at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
> >>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
> >>     at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:298)
> >>     at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:272)
> >>     at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult$lzycompute(NativeCommand.scala:35)
> >>     at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult(NativeCommand.scala:35)
> >>     at org.apache.spark.sql.hive.execution.NativeCommand.execute(NativeCommand.scala:38)
> >>     at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:360)
> >>     at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:360)
> >>     at org.apache.spark.sql.SchemaRDDLike$class.$init$(SchemaRDDLike.scala:58)
> >>     at org.apache.spark.sql.SchemaRDD.<init>(SchemaRDD.scala:103)
> >>     at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
> >>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>     at java.lang.reflect.Method.invoke(Method.java:483)
> >>     at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
> >>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
> >>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> >> 14/10/15 17:45:25 INFO JDO: Exception thrown
> >> Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
> >> org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
> >>     at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:485)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3387)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3199)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2850)
> >>     at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1624)
> >>     at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:953)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:687)
> >>     at org.datanucleus.store.rdbms.RDBMSStoreManager.getPropertiesForGenerator(RDBMSStoreManager.java:2059)
> >>     at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1364)
> >>     at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3727)
> >>     at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:2574)
> >>     at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:526)
> >>     at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:202)
> >>     at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:1326)
> >>     at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2123)
> >>     at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:1972)
> >>     at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1820)
> >>     at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217)
> >>     at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727)
> >>     at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)
> >>     at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:5734)
> >>     at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:5644)
> >>     at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:5622)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>     at java.lang.reflect.Method.invoke(Method.java:483)
> >>     at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
> >>     at com.sun.proxy.$Proxy13.verifySchema(Unknown Source)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:403)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:441)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:326)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:286)
> >>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
> >>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4060)
> >>     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:121)
> >>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >>     at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
> >>     at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1210)
> >>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
> >>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2372)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2383)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1139)
> >>     at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1128)
> >>     at org.apache.hadoop.hive.ql.exec.DDLTask.showTables(DDLTask.java:2236)
> >>     at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:333)
> >>     at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
> >>     at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
> >>     at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
> >>     at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
> >>     at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
> >>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
> >>     at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:298)
> >>     at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:272)
> >>     at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult$lzycompute(NativeCommand.scala:35)
> >>     at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult(NativeCommand.scala:35)
> >>     at org.apache.spark.sql.hive.execution.NativeCommand.execute(NativeCommand.scala:38)
> >>     at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:360)
> >>     at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:360)
> >>     at org.apache.spark.sql.SchemaRDDLike$class.$init$(SchemaRDDLike.scala:58)
> >>     at org.apache.spark.sql.SchemaRDD.<init>(SchemaRDD.scala:103)
> >>     at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
> >>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
> >>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>     at java.lang.reflect.Method.invoke(Method.java:483)
> >>     at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
> >>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
> >>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> >> 14/10/15 17:45:25 ERROR RetryingRawStore: JDO datastore error. Retrying metastore command after 1000 ms (attempt 1 of 1)
> >> 14/10/15 17:45:26 WARN Query: Query for candidates of org.apache.hadoop.hive.metastore.model.MVersionTable and subclasses resulted in no possible candidates
> >> Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
> >> org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : "`VERSION`" in
> >>
> >>
> >> ================
> >>
> >>
> >> Can somebody tell me what I am missing?
> >>
> >>
> >> The same command works via the Hive shell.
> >>
> >>
> >> Thanks,
> >> Anurag Tangri
> >>
> >
>
>
>
> --
> Marcelo
>
