Github user lshmouse commented on the issue:

    https://github.com/apache/spark/pull/13706
  
    @lianhuiwang 
    
    Just some feedback. With this patch, creating a MACRO throws the following exception.
    Any suggestions? I am trying to debug it.
    
    ```
    16/11/30 16:59:18 INFO execution.SparkSqlParser: Parsing command: CREATE TEMPORARY MACRO flr(time_ms bigint) FLOOR(time_ms/1000/3600)*3600
    16/11/30 16:59:18 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING,
    org.apache.spark.sql.AnalysisException: Cannot resolve '(FLOOR(((boundreference() / 1000) / 3600)) * 3600)' for CREATE TEMPORARY MACRO flr, due to data type mismatch: differing types in '(FLOOR(((boundreference() / 1000) / 3600)) * 3600)' (bigint and int).;
      at org.apache.spark.sql.execution.command.CreateMacroCommand.run(macros.scala:70)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:60)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:58)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:120)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:120)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:141)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:138)
      at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:119)
      at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
      at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
      at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
      at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
      at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
      at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
      at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:682)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:221)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:165)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:162)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1854)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:175)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
      at java.util.concurrent.FutureTask.run(FutureTask.java:262)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:745)
    ```
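    
    Reading the error, the mismatch is between the bigint result of `FLOOR(...)` and the int literal `3600`, which this code path apparently does not implicitly coerce. As an untested workaround sketch (not a fix for the patch itself), casting the literal so both operands of `*` are bigint might let the macro body type-check:
    
    ```sql
    -- Hypothetical workaround (untested): cast the int literal to BIGINT so
    -- both sides of '*' have the same type and no implicit coercion is needed.
    CREATE TEMPORARY MACRO flr(time_ms bigint)
      FLOOR(time_ms/1000/3600) * CAST(3600 AS BIGINT);
    ```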

