It works, thanks for your great help.
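
For anyone who finds this thread later: the fix was simply terminating the
CREATE TEMPORARY TABLE statement with a semi-colon before the SELECT, so the
full working sequence in the spark-sql shell (using the path from my original
mail) is:

CREATE TEMPORARY TABLE jsonTable
USING org.apache.spark.sql.json
OPTIONS (
  path "examples/src/main/resources/people.json"
);
SELECT * FROM jsonTable;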

On Mon, Mar 30, 2015 at 10:07 PM, Denny Lee <denny.g....@gmail.com> wrote:

> Hi Vincent,
>
> This may be a case of a missing semi-colon after your CREATE
> TEMPORARY TABLE statement.  I ran your original statement (without the
> semi-colon) and got the same error as you did.  As soon as I added it in, I
> was good to go again:
>
> CREATE TEMPORARY TABLE jsonTable
> USING org.apache.spark.sql.json
> OPTIONS (
>   path "/samples/people.json"
> );
> -- above needed a semi-colon so the temporary table could be created first
> SELECT * FROM jsonTable;
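>
> As a side note (a sketch, not something I've re-verified just now): since
> the spark-sql shell accepts the usual Hive CLI options, you should also be
> able to put those two semi-colon-terminated statements into a file and run
> them non-interactively, e.g.
>
>   ./bin/spark-sql -f jsonTable.sql
>
> where jsonTable.sql (the file name is just an example) contains the CREATE
> TEMPORARY TABLE and SELECT statements above.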
>
> HTH!
> Denny
>
>
> On Sun, Mar 29, 2015 at 6:59 AM Vincent He <vincent.he.andr...@gmail.com>
> wrote:
>
>> No luck, it does not work. Does anyone know whether there is some special
>> setting for the spark-sql CLI so we do not need to write code to use Spark
>> SQL? Does anyone have a simple example of this? I appreciate any help;
>> thanks in advance.
>>
>> On Sat, Mar 28, 2015 at 9:05 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>>
>>> See
>>> https://databricks.com/blog/2015/03/24/spark-sql-graduates-from-alpha-in-spark-1-3.html
>>>
>>> I haven't tried the SQL statements in the above blog myself.
>>>
>>> Cheers
>>>
>>> On Sat, Mar 28, 2015 at 5:39 AM, Vincent He <
>>> vincent.he.andr...@gmail.com> wrote:
>>>
>>>> Thanks for the information. I have read it, and I can run the samples
>>>> with Scala or Python, but with the spark-sql shell I cannot get an example
>>>> running successfully. Can you give me an example I can run with
>>>> "./bin/spark-sql" without writing any code? Thanks.
>>>>
>>>> On Sat, Mar 28, 2015 at 7:35 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>>>>
>>>>> Please take a look at
>>>>> https://spark.apache.org/docs/latest/sql-programming-guide.html
>>>>>
>>>>> Cheers
>>>>>
>>>>>
>>>>>
>>>>> > On Mar 28, 2015, at 5:08 AM, Vincent He <
>>>>> vincent.he.andr...@gmail.com> wrote:
>>>>> >
>>>>> >
>>>>> > I am learning Spark SQL and trying the spark-sql examples. I ran the
>>>>> following code, but I got the exception "ERROR CliDriver:
>>>>> org.apache.spark.sql.AnalysisException: cannot recognize input near
>>>>> 'CREATE' 'TEMPORARY' 'TABLE' in ddl statement; line 1 pos 17". I have two
>>>>> questions:
>>>>> > 1. Do we have a list of the statements supported in spark-sql?
>>>>> > 2. Does the spark-sql shell support HiveQL? If yes, how do I enable it?
>>>>> >
>>>>> > The example I tried:
>>>>> > CREATE TEMPORARY TABLE jsonTable
>>>>> > USING org.apache.spark.sql.json
>>>>> > OPTIONS (
>>>>> >   path "examples/src/main/resources/people.json"
>>>>> > )
>>>>> > SELECT * FROM jsonTable
>>>>> > The exception I got:
>>>>> >         > CREATE TEMPORARY TABLE jsonTable
>>>>> >          > USING org.apache.spark.sql.json
>>>>> >          > OPTIONS (
>>>>> >          >   path "examples/src/main/resources/people.json"
>>>>> >          > )
>>>>> >          > SELECT * FROM jsonTable
>>>>> >          > ;
>>>>> > 15/03/28 17:38:34 INFO ParseDriver: Parsing command: CREATE TEMPORARY TABLE jsonTable
>>>>> > USING org.apache.spark.sql.json
>>>>> > OPTIONS (
>>>>> >   path "examples/src/main/resources/people.json"
>>>>> > )
>>>>> > SELECT * FROM jsonTable
>>>>> > NoViableAltException(241@[654:1: ddlStatement : ( createDatabaseStatement |
>>>>> switchDatabaseStatement | dropDatabaseStatement | createTableStatement |
>>>>> dropTableStatement | truncateTableStatement | alterStatement |
>>>>> descStatement | showStatement | metastoreCheck | createViewStatement |
>>>>> dropViewStatement | createFunctionStatement | createMacroStatement |
>>>>> createIndexStatement | dropIndexStatement | dropFunctionStatement |
>>>>> dropMacroStatement | analyzeStatement | lockStatement | unlockStatement |
>>>>> lockDatabase | unlockDatabase | createRoleStatement | dropRoleStatement |
>>>>> grantPrivileges | revokePrivileges | showGrants | showRoleGrants |
>>>>> showRolePrincipals | showRoles | grantRole | revokeRole | setRole |
>>>>> showCurrentRole );])
>>>>> >             at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
>>>>> >             at org.antlr.runtime.DFA.predict(DFA.java:144)
>>>>> >             at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2090)
>>>>> >             at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1398)
>>>>> >             at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1036)
>>>>> >             at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199)
>>>>> >             at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
>>>>> >             at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:227)
>>>>> >             at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:241)
>>>>> >             at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
>>>>> >             at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
>>>>> >             at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>>>>> >             at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>>>> >             at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>>>>> >             at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>>>>> >             at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(AbstractSparkSQLParser.scala:38)
>>>>> >             at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:138)
>>>>> >             at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:138)
>>>>> >             at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:96)
>>>>> >             at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:95)
>>>>> >             at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>>>>> >             at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>>>> >             at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>>>>> >             at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>>>>> >             at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(AbstractSparkSQLParser.scala:38)
>>>>> >             at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:234)
>>>>> >             at org.apache.spark.sql.hive.HiveContext$$anonfun$sql$1.apply(HiveContext.scala:92)
>>>>> >             at org.apache.spark.sql.hive.HiveContext$$anonfun$sql$1.apply(HiveContext.scala:92)
>>>>> >             at scala.Option.getOrElse(Option.scala:120)
>>>>> >             at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:92)
>>>>> >             at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
>>>>> >             at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:275)
>>>>> >             at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
>>>>> >             at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:211)
>>>>> >             at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>>>>> >             at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>> >             at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>> >             at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>> >             at java.lang.reflect.Method.invoke(Method.java:606)
>>>>> >             at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
>>>>> >             at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
>>>>> >             at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
>>>>> >             at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
>>>>> >             at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>>> > 15/03/28 17:38:34 ERROR SparkSQLDriver: Failed in [CREATE TEMPORARY TABLE jsonTable
>>>>> > USING org.apache.spark.sql.json
>>>>> > OPTIONS (
>>>>> >   path "examples/src/main/resources/people.json"
>>>>> > )
>>>>> > SELECT * FROM jsonTable
>>>>> > ]
>>>>> > org.apache.spark.sql.AnalysisException: cannot recognize input near 'CREATE' 'TEMPORARY' 'TABLE' in ddl statement; line 1 pos 17
>>>>> >             at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:254)
>>>>> >             at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
>>>>> >             at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
>>>>> >             at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>>>>> >             at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>>>> >             at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>>>>> >             at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>>>>> >             at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(AbstractSparkSQLParser.scala:38)
>>>>> >             at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:138)
>>>>> >             at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:138)
>>>>> >             at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:96)
>>>>> >             at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:95)
>>>>> >             at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>>>>> >             at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>>>> >             at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>>>>> >             at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>>>>> >             at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>>>>> >             at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(AbstractSparkSQLParser.scala:38)
>>>>> >             at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:234)
>>>>> >             at org.apache.spark.sql.hive.HiveContext$$anonfun$sql$1.apply(HiveContext.scala:92)
>>>>> >             at org.apache.spark.sql.hive.HiveContext$$anonfun$sql$1.apply(HiveContext.scala:92)
>>>>> >             at scala.Option.getOrElse(Option.scala:120)
>>>>> >             at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:92)
>>>>> >             at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
>>>>> >             at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:275)
>>>>> >             at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
>>>>> >             at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:211)
>>>>> >             at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>>>>> >             at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>> >             at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>> >             at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>> >             at java.lang.reflect.Method.invoke(Method.java:606)
>>>>> >             at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
>>>>> >             at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
>>>>> >             at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
>>>>> >             at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
>>>>> >             at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>>> >
>>>>> > 15/03/28 17:38:34 ERROR CliDriver: org.apache.spark.sql.AnalysisException: cannot recognize input near 'CREATE' 'TEMPORARY' 'TABLE' in ddl statement; line 1 pos 17
>>>>> >
>>>>> > spark-sql>
>>>>> >
>>>>> >
>>>>> > Vincent (Dashuang) He
>>>>> >
>>>>>
>>>>
>>>>
>>>
>>
