[ https://issues.apache.org/jira/browse/SPARK-18964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15769073#comment-15769073 ]

Suhas Nalapure commented on SPARK-18964:
----------------------------------------

Right. The assumption comes from the Spark documentation
(https://spark.apache.org/docs/1.6.0/sql-programming-guide.html): "In addition
to the basic SQLContext, you can also create a HiveContext, which provides a
superset of the functionality provided by the basic SQLContext." I stand
corrected if I have misread that.

In my project, the issue is that we have to use HiveContext because, as of
Spark 1.6.0, SQLContext does not support window functions (there is no other
reason). As a side effect, we lose the Time Interval Literals, which I find
quite handy.
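For the record, the interval arithmetic the failing query relies on is plain date-plus-N-days. A minimal Python sketch of what `Order_Date + INTERVAL 7 DAY > Ship_Date` expresses (the sample dates are made up, not from our data):

```python
from datetime import date, timedelta

def shipment_on_time(order_date: date, ship_date: date, window_days: int = 7) -> str:
    """Mirror of: case when Order_Date + INTERVAL 7 DAY > Ship_Date
    then 'On Time' else 'Late' end."""
    return "On Time" if order_date + timedelta(days=window_days) > ship_date else "Late"

# Hypothetical sales rows: (Order_Date, Ship_Date)
sales = [
    (date(2016, 12, 1), date(2016, 12, 5)),   # shipped within the 7-day window
    (date(2016, 12, 1), date(2016, 12, 15)),  # shipped after the window
]
print([shipment_on_time(o, s) for o, s in sales])  # ['On Time', 'Late']
```

Note the comparison is strict (`>`), so an order shipped exactly 7 days after the order date is classified "Late", matching the SQL above.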

> HiveContext does not support Time Interval Literals
> ---------------------------------------------------
>
>                 Key: SPARK-18964
>                 URL: https://issues.apache.org/jira/browse/SPARK-18964
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.5.0, 1.6.0
>            Reporter: Suhas Nalapure
>
> HiveContext does not recognize the Time Interval Literals described here:
> https://databricks.com/blog/2015/09/16/apache-spark-1-5-dataframe-api-highlights.html
> E.g. the following Spark SQL runs just fine when a SQLContext is used but
> fails when a HiveContext is used:
>
> select *, case when `Order_Date` + INTERVAL 7 DAY > `Ship_Date` then "On Time" else "Late" end as Shipment_On_Time from sales;
> Logs:
> ------
> org.apache.spark.sql.AnalysisException: cannot recognize input near 'INTERVAL' '7' 'DAY' in expression specification; line 2 pos 30
>       at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:318)
>       at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
>       at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>       at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>       at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>       at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
>       at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:295)
>       at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
>       at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
>       at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:279)
>       at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:226)
>       at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:225)
>       at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:268)
>       at org.apache.spark.sql.hive.HiveQLDialect.parse(HiveContext.scala:65)
>       at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
>       at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
>       at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114)
>       at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:113)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>       at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>       at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>       at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>       at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>       at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>       at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
>       at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
>       at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
>       at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43)
>       at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231)
>       at org.apache.spark.sql.hive.HiveContext.parseSql(HiveContext.scala:331)
>       at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
>       at com.dataken.script.compilation.CompilationContext.registerQueryDF(CompilationContext.java:238)
>       at com.dataken.script.compilation.CompilationContext.createOrAlterCol(CompilationContext.java:286)
>       at com.dataken.script.antlr.listener.DatakenTLCompiler.exitVColAssignment(DatakenTLCompiler.java:136)
>       at com.dataken.script.antlr.generated.DatakenTLParser$VColAssignmentContext.exitRule(DatakenTLParser.java:408)
>       at org.antlr.v4.runtime.tree.ParseTreeWalker.exitRule(ParseTreeWalker.java:71)
>       at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:54)
>       at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:52)
>       at com.dataken.script.antlr.listener.DatakenTLCompiler.compile(DatakenTLCompiler.java:171)
>       at com.dataken.script.Script.compile(Script.java:226)
>       at com.dataken.script.Script.execute(Script.java:273)
>       at com.dataken.script.ScriptTest.lambda$0(ScriptTest.java:116)
>       at java.util.ArrayList.forEach(ArrayList.java:1249)
>       at com.dataken.script.ScriptTest.testSample(ScriptTest.java:115)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>       at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>       at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>       at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>       at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>       at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>       at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>       at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>       at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>       at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>       at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>       at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>       at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>       at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>       at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>       at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
>       at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>       at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
>       at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:678)
>       at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
>       at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
> 2016-12-21 16:33:52,920 DEBUG [main] com.dataken.script.ScriptTest



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
