Hi Don,

There is a new SQL config `spark.sql.hive.caseSensitiveInferenceMode` which
sets the action to take when a case-sensitive schema cannot be read from a
Hive table's properties.
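
You can check which mode your session is using; a minimal sketch (`spark.conf.get` just reads the current SQL conf value, and on default settings it should return the default mode):

scala> spark.conf.get("spark.sql.hive.caseSensitiveInferenceMode")
res0: String = INFER_AND_SAVE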

The default setting of this config is `INFER_AND_SAVE`, which infers the
case-sensitive schema from the underlying data files and writes it back to
the table properties. From your description, it sounds like you don't have
permission to alter the Hive table, which is why you see the warning log.

You can change the setting to `INFER_ONLY`, which infers the schema but
doesn't try to write it back to the table properties, or `NEVER_INFER`,
which falls back to using the case-insensitive metastore schema without
inferring.
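
For example, to avoid the ALTER attempt on tables you can only read, you
could launch the shell with the config set, or set it per session; a
minimal sketch (the table name is taken from your example):

$ spark-shell --conf spark.sql.hive.caseSensitiveInferenceMode=NEVER_INFER

// or, since it is a runtime SQL conf, from inside a session before
// querying the table:
scala> spark.conf.set("spark.sql.hive.caseSensitiveInferenceMode", "NEVER_INFER")
scala> spark.sql("select * from test.my_table").show()  // no ALTER TABLE attempted

If the table's underlying data files contain mixed-case column names,
`INFER_ONLY` is the safer choice, at the cost of re-inferring the schema
on each access.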


Don Drake wrote
> I'm in the process of migrating a few applications from Spark 2.1.1 to
> Spark 2.2.0 and so far the transition has been smooth.  One odd thing is
> that when I query a Hive table that I do not own, but have read access, I
> get a very long WARNING with a stack trace that basically says I do not
> have permission to ALTER the table.
> 
> As you can see, I'm just doing a SELECT on the table.   Everything works,
> but this stack trace is a little concerning.  Anyone know what is going
> on?
> 
> 
> I'm using a downloaded binary (spark-2.2.0-bin-hadoop2.6) on CDH 5.10.1.
> 
> Thanks.
> 
> -Don
> 
> -- 
> Donald Drake
> Drake Consulting
> http://www.drakeconsulting.com/
> https://twitter.com/dondrake
> 800-733-2143
> 
> scala> spark.sql("select * from test.my_table")
> 17/09/01 15:40:30 WARN HiveExternalCatalog: Could not alter schema of
> table
>  `test`.`my_table` in a Hive compatible way. Updating Hive metastore in
> Spark SQL specific format.
> java.lang.reflect.InvocationTargetException
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>                 at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>                 at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>                 at java.lang.reflect.Method.invoke(Method.java:497)
>                 at
> org.apache.spark.sql.hive.client.Shim_v0_12.alterTable(HiveShim.scala:399)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$alterTable$1.apply$mcV$sp(HiveClientImpl.scala:461)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$alterTable$1.apply(HiveClientImpl.scala:457)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$alterTable$1.apply(HiveClientImpl.scala:457)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:290)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:231)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:230)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:273)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl.alterTable(HiveClientImpl.scala:457)
>                 at
> org.apache.spark.sql.hive.client.HiveClient$class.alterTable(HiveClient.scala:87)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl.alterTable(HiveClientImpl.scala:79)
>                 at
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$alterTableSchema$1.apply$mcV$sp(HiveExternalCatalog.scala:636)
>                 at
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$alterTableSchema$1.apply(HiveExternalCatalog.scala:627)
>                 at
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$alterTableSchema$1.apply(HiveExternalCatalog.scala:627)
>                 at
> org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
>                 at
> org.apache.spark.sql.hive.HiveExternalCatalog.alterTableSchema(HiveExternalCatalog.scala:627)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog.updateCatalogSchema(HiveMetastoreCatalog.scala:267)
>                 at org.apache.spark.sql.hive.HiveMetastoreCatalog.org$apache$spark$sql$hive$HiveMetastoreCatalog$$inferIfNeeded(HiveMetastoreCatalog.scala:251)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6$$anonfun$7.apply(HiveMetastoreCatalog.scala:195)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6$$anonfun$7.apply(HiveMetastoreCatalog.scala:194)
>                 at scala.Option.getOrElse(Option.scala:121)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6.apply(HiveMetastoreCatalog.scala:194)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6.apply(HiveMetastoreCatalog.scala:187)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog.withTableCreationLock(HiveMetastoreCatalog.scala:54)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog.convertToLogicalRelation(HiveMetastoreCatalog.scala:187)
>                 at org.apache.spark.sql.hive.RelationConversions.org$apache$spark$sql$hive$RelationConversions$$convert(HiveStrategies.scala:199)
>                 at
> org.apache.spark.sql.hive.RelationConversions$$anonfun$apply$4.applyOrElse(HiveStrategies.scala:219)
>                 at
> org.apache.spark.sql.hive.RelationConversions$$anonfun$apply$4.applyOrElse(HiveStrategies.scala:208)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
>                 at
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.hive.RelationConversions.apply(HiveStrategies.scala:208)
>                 at
> org.apache.spark.sql.hive.RelationConversions.apply(HiveStrategies.scala:184)
>                 at
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
>                 at
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
>                 at
> scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
>                 at
> scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
>                 at
> scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)
>                 at
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
>                 at
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
>                 at scala.collection.immutable.List.foreach(List.scala:381)
>                 at
> org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
>                 at
> org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
>                 at
> org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
>                 at
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
>                 at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:66)
>                 at
> org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
>                 at $line14.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:24)
>                 at $line14.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:29)
>                 at $line14.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:31)
>                 at $line14.$read$$iw$$iw$$iw$$iw$$iw.<init>(<console>:33)
>                 at $line14.$read$$iw$$iw$$iw$$iw.<init>(<console>:35)
>                 at $line14.$read$$iw$$iw$$iw.<init>(<console>:37)
>                 at $line14.$read$$iw$$iw.<init>(<console>:39)
>                 at $line14.$read$$iw.<init>(<console>:41)
>                 at $line14.$read.<init>(<console>:43)
>                 at $line14.$read$.<init>(<console>:47)
>                 at $line14.$read$.<clinit>(<console>)
>                 at $line14.$eval$.$print$lzycompute(<console>:7)
>                 at $line14.$eval$.$print(<console>:6)
>                 at $line14.$eval.$print(<console>)
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>                 at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>                 at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>                 at java.lang.reflect.Method.invoke(Method.java:497)
>                 at
> scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
>                 at
> scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
>                 at
> scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
>                 at
> scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
>                 at
> scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
>                 at
> scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
>                 at
> scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
>                 at
> scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
>                 at
> scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
>                 at
> scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
>                 at
> scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
>                 at
> scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
>                 at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:415)
>                 at
> scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:923)
>                 at
> scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
>                 at
> scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
>                 at
> scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
>                 at
> scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
>                 at org.apache.spark.repl.Main$.doMain(Main.scala:70)
>                 at org.apache.spark.repl.Main$.main(Main.scala:53)
>                 at org.apache.spark.repl.Main.main(Main.scala)
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>                 at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>                 at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>                 at java.lang.reflect.Method.invoke(Method.java:497)
>                 at
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
>                 at
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
>                 at
> org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
>                 at
> org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
>                 at
> org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to
> alter table. User ddrak does not have privileges for ALTERTABLE_ADDCOLS
>                 at
> org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:498)
>                 at
> org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:484)
>                 ... 112 more
> Caused by: MetaException(message:User ddrak does not have privileges for
> ALTERTABLE_ADDCOLS)
>                 at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_with_cascade_result$alter_table_with_cascade_resultStandardScheme.read(ThriftHiveMetastore.java:40942)
>                 at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_with_cascade_result$alter_table_with_cascade_resultStandardScheme.read(ThriftHiveMetastore.java:40919)
>                 at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_with_cascade_result.read(ThriftHiveMetastore.java:40861)
>                 at
> org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
>                 at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_alter_table_with_cascade(ThriftHiveMetastore.java:1374)
>                 at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.alter_table_with_cascade(ThriftHiveMetastore.java:1358)
>                 at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:340)
>                 at
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table(SessionHiveMetaStoreClient.java:251)
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>                 at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>                 at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>                 at java.lang.reflect.Method.invoke(Method.java:497)
>                 at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
>                 at com.sun.proxy.$Proxy27.alter_table(Unknown Source)
>                 at
> org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:496)
>                 ... 113 more
> 17/09/01 15:40:30 WARN HiveMetastoreCatalog: Unable to save case-sensitive
> schema for table test.my_table
> org.apache.spark.sql.AnalysisException:
> org.apache.hadoop.hive.ql.metadata.HiveException: at least one column must
> be specified for the table;
>                 at
> org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
>                 at
> org.apache.spark.sql.hive.HiveExternalCatalog.alterTableSchema(HiveExternalCatalog.scala:627)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog.updateCatalogSchema(HiveMetastoreCatalog.scala:267)
>                 at org.apache.spark.sql.hive.HiveMetastoreCatalog.org$apache$spark$sql$hive$HiveMetastoreCatalog$$inferIfNeeded(HiveMetastoreCatalog.scala:251)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6$$anonfun$7.apply(HiveMetastoreCatalog.scala:195)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6$$anonfun$7.apply(HiveMetastoreCatalog.scala:194)
>                 at scala.Option.getOrElse(Option.scala:121)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6.apply(HiveMetastoreCatalog.scala:194)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6.apply(HiveMetastoreCatalog.scala:187)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog.withTableCreationLock(HiveMetastoreCatalog.scala:54)
>                 at
> org.apache.spark.sql.hive.HiveMetastoreCatalog.convertToLogicalRelation(HiveMetastoreCatalog.scala:187)
>                 at org.apache.spark.sql.hive.RelationConversions.org$apache$spark$sql$hive$RelationConversions$$convert(HiveStrategies.scala:199)
>                 at
> org.apache.spark.sql.hive.RelationConversions$$anonfun$apply$4.applyOrElse(HiveStrategies.scala:219)
>                 at
> org.apache.spark.sql.hive.RelationConversions$$anonfun$apply$4.applyOrElse(HiveStrategies.scala:208)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
>                 at
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>                 at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
>                 at
> org.apache.spark.sql.hive.RelationConversions.apply(HiveStrategies.scala:208)
>                 at
> org.apache.spark.sql.hive.RelationConversions.apply(HiveStrategies.scala:184)
>                 at
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
>                 at
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
>                 at
> scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
>                 at
> scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
>                 at
> scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)
>                 at
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
>                 at
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
>                 at scala.collection.immutable.List.foreach(List.scala:381)
>                 at
> org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
>                 at
> org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
>                 at
> org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
>                 at
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
>                 at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:66)
>                 at
> org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
>                 at $line14.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:24)
>                 at $line14.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:29)
>                 at $line14.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:31)
>                 at $line14.$read$$iw$$iw$$iw$$iw$$iw.<init>(<console>:33)
>                 at $line14.$read$$iw$$iw$$iw$$iw.<init>(<console>:35)
>                 at $line14.$read$$iw$$iw$$iw.<init>(<console>:37)
>                 at $line14.$read$$iw$$iw.<init>(<console>:39)
>                 at $line14.$read$$iw.<init>(<console>:41)
>                 at $line14.$read.<init>(<console>:43)
>                 at $line14.$read$.<init>(<console>:47)
>                 at $line14.$read$.<clinit>(<console>)
>                 at $line14.$eval$.$print$lzycompute(<console>:7)
>                 at $line14.$eval$.$print(<console>:6)
>                 at $line14.$eval.$print(<console>)
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>                 at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>                 at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>                 at java.lang.reflect.Method.invoke(Method.java:497)
>                 at
> scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
>                 at
> scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
>                 at
> scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
>                 at
> scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
>                 at
> scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
>                 at
> scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
>                 at
> scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
>                 at
> scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
>                 at
> scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
>                 at
> scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
>                 at
> scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
>                 at
> scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
>                 at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:415)
>                 at
> scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:923)
>                 at
> scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
>                 at
> scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
>                 at
> scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
>                 at
> scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
>                 at org.apache.spark.repl.Main$.doMain(Main.scala:70)
>                 at org.apache.spark.repl.Main$.main(Main.scala:53)
>                 at org.apache.spark.repl.Main.main(Main.scala)
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>                 at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>                 at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>                 at java.lang.reflect.Method.invoke(Method.java:497)
>                 at
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
>                 at
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
>                 at
> org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
>                 at
> org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
>                 at
> org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: at least one
> column must be specified for the table
>                 at
> org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:193)
>                 at
> org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:495)
>                 at
> org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:484)
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>                 at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>                 at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>                 at java.lang.reflect.Method.invoke(Method.java:497)
>                 at
> org.apache.spark.sql.hive.client.Shim_v0_12.alterTable(HiveShim.scala:399)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$alterTable$1.apply$mcV$sp(HiveClientImpl.scala:461)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$alterTable$1.apply(HiveClientImpl.scala:457)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$alterTable$1.apply(HiveClientImpl.scala:457)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:290)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:231)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:230)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:273)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl.alterTable(HiveClientImpl.scala:457)
>                 at
> org.apache.spark.sql.hive.client.HiveClient$class.alterTable(HiveClient.scala:87)
>                 at
> org.apache.spark.sql.hive.client.HiveClientImpl.alterTable(HiveClientImpl.scala:79)
>                 at
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$alterTableSchema$1.apply$mcV$sp(HiveExternalCatalog.scala:643)
>                 at
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$alterTableSchema$1.apply(HiveExternalCatalog.scala:627)
>                 at
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$alterTableSchema$1.apply(HiveExternalCatalog.scala:627)
>                 at
> org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
>                 ... 93 more





-----
Liang-Chi Hsieh | @viirya 
Spark Technology Center 
http://www.spark.tc/ 
