[ https://issues.apache.org/jira/browse/SPARK-15224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279386#comment-15279386 ]

poseidon commented on SPARK-15224:
----------------------------------

Well, it's obvious: the exception says it's not valid syntax. But in original Hive SQL it is valid and works well.
After we add a jar to the Thrift server, every SQL statement depends on that jar, and every executor adds the dependency when it starts.
If we can neither delete jars nor see how many jars we have loaded, the Thrift server becomes a very fat server after running for a while.
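
For comparison, here is a minimal sketch of the same session against plain Hive 1.2.1, where these are handled as resource-management commands rather than SQL (the jar path is a placeholder and the output is abbreviated):

{code:title=hive-session.txt|borderStyle=solid}
hive> ADD JAR /tmp/myudfs.jar;
Added [/tmp/myudfs.jar] to class path
hive> LIST JARS;
/tmp/myudfs.jar
hive> DELETE JAR /tmp/myudfs.jar;
hive> LIST JARS;
{code}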


> Can not delete jar and list jar in spark Thrift server
> ------------------------------------------------------
>
>                 Key: SPARK-15224
>                 URL: https://issues.apache.org/jira/browse/SPARK-15224
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.1
>         Environment: spark 1.6.1
> hive 1.2.1 
> hdfs 2.7.1 
>            Reporter: poseidon
>            Priority: Minor
>
> When you try to delete a jar by executing delete jar xxxx, or run list jar, in your beeline client, it throws an exception:
> delete jar;
> Error: org.apache.spark.sql.AnalysisException: line 1:7 missing FROM at 'jars' near 'jars'
> line 1:12 missing EOF at 'myudfs' near 'jars'; (state=,code=0)
> list jar;
> Error: org.apache.spark.sql.AnalysisException: cannot recognize input near 'list' 'jars' '<EOF>'; line 1 pos 0 (state=,code=0)
> {code:title=funnlog.log|borderStyle=solid}
> 16/05/09 17:26:52 INFO thriftserver.SparkExecuteStatementOperation: Running query 'list jar' with 1da09765-efb4-42dc-8890-3defca40f89d
> 16/05/09 17:26:52 INFO parse.ParseDriver: Parsing command: list jar
> NoViableAltException(26@[])
>       at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1071)
>       at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:202)
>       at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
>       at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:276)
>       at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:303)
>       at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
>       at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>       at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>       at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>       at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
>       at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:295)
>       at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
>       at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
>       at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:293)
>       at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:240)
>       at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:239)
>       at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:282)
>       at org.apache.spark.sql.hive.HiveQLDialect.parse(HiveContext.scala:65)
>       at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
>       at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
>       at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114)
>       at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:113)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>       at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>       at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>       at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
>       at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
>       at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
>       at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43)
>       at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231)
>       at org.apache.spark.sql.hive.HiveContext.parseSql(HiveContext.scala:331)
>       at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:211)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:154)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:151)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:164)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> 16/05/09 17:26:52 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING,
> org.apache.spark.sql.AnalysisException: cannot recognize input near 'list' 'jar' '<EOF>'; line 1 pos 0
>       at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:318)
>       at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
>       at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>       at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>       at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>       at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
>       at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:295)
>       at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
>       at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
>       at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:293)
>       at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:240)
>       at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:239)
>       at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:282)
>       at org.apache.spark.sql.hive.HiveQLDialect.parse(HiveContext.scala:65)
>       at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
>       at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
>       at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114)
>       at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:113)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>       at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>       at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>       at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
>       at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
>       at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
>       at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43)
>       at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231)
>       at org.apache.spark.sql.hive.HiveContext.parseSql(HiveContext.scala:331)
>       at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:211)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:154)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:151)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:164)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> 16/05/09 17:26:52 ERROR thriftserver.SparkExecuteStatementOperation: Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: org.apache.spark.sql.AnalysisException: cannot recognize input near 'list' 'jar' '<EOF>'; line 1 pos 0
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:246)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:154)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:151)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:164)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> {code}
> {code:title=delete jar fulllog.java|borderStyle=solid}
> 16/05/09 17:28:03 INFO thriftserver.SparkExecuteStatementOperation: Running query 'delete jar' with 156e31de-c312-49de-9ac9-bedea86744f2
> 16/05/09 17:28:03 INFO parse.ParseDriver: Parsing command: delete jar
> 16/05/09 17:28:03 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING,
> org.apache.spark.sql.AnalysisException: missing FROM at 'jar' near '<EOF>'; line 1 pos 7
>       at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:318)
>       at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
>       at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>       at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>       at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>       at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
>       at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:295)
>       at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
>       at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
>       at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:293)
>       at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:240)
>       at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:239)
>       at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:282)
>       at org.apache.spark.sql.hive.HiveQLDialect.parse(HiveContext.scala:65)
>       at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
>       at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
>       at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114)
>       at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:113)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>       at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>       at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>       at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
>       at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
>       at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
>       at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43)
>       at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231)
>       at org.apache.spark.sql.hive.HiveContext.parseSql(HiveContext.scala:331)
>       at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:211)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:154)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:151)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:164)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> 16/05/09 17:28:03 ERROR thriftserver.SparkExecuteStatementOperation: Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: org.apache.spark.sql.AnalysisException: missing FROM at 'jar' near '<EOF>'; line 1 pos 7
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:246)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:154)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:151)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:164)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> {code}
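
Note how both stack traces above fail inside HiveQl$.parseSql: the Thrift server pushes these statements through the SQL grammar, which has no rule for them, instead of dispatching them as session resource commands the way the Hive CLI does. A condensed beeline transcript of the failure, reconstructed from the logs above (the JDBC URL is a placeholder):

{code:title=beeline-repro.txt|borderStyle=solid}
0: jdbc:hive2://thrift-host:10000> list jar;
Error: org.apache.spark.sql.AnalysisException: cannot recognize input near 'list' 'jar' '<EOF>'; line 1 pos 0 (state=,code=0)
0: jdbc:hive2://thrift-host:10000> delete jar;
Error: org.apache.spark.sql.AnalysisException: missing FROM at 'jar' near '<EOF>'; line 1 pos 7 (state=,code=0)
{code}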


