[ https://issues.apache.org/jira/browse/SPARK-18515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682732#comment-15682732 ]
Dongjoon Hyun edited comment on SPARK-18515 at 11/21/16 7:10 AM:
-----------------------------------------------------------------

~While digging this, option 1 seems to increase complexity. As another option (Option 3), I want to propose converting arbitrary-type constant input literals into *string* literals at the beginning.~
~I'll make a PR for this for the Option 3 review with an all-atomic-type test.~
Hmm, it has a side-effect and is also different from Hive. Sorry. Never mind this.

was (Author: dongjoon):
~~While digging this, option 1 seems to increase complexity. As another option (Option 3), I want to propose converting arbitrary-type constant input literals into *string* literals at the beginning.~~
~~I'll make a PR for this for the Option 3 review with an all-atomic-type test.~~
Hmm, it has a side-effect and is also different from Hive. Sorry. Never mind this.

> AlterTableDropPartitions fails for non-string columns
> -----------------------------------------------------
>
>          Key: SPARK-18515
>          URL: https://issues.apache.org/jira/browse/SPARK-18515
>      Project: Spark
>   Issue Type: Bug
>   Components: SQL
>     Reporter: Herman van Hovell
>     Assignee: Dongjoon Hyun
>
> AlterTableDropPartitions fails with a Scala MatchError if you use non-string partitioning columns:
> {noformat}
> spark.sql("drop table if exists tbl_x")
> spark.sql("create table tbl_x (a int) partitioned by (p int)")
> spark.sql("alter table tbl_x add partition (p=10)")
> spark.sql("alter table tbl_x drop partition (p=10)")
> {noformat}
> Yields the following error:
> {noformat}
> scala.MatchError: (cast(p#8 as int) = 10) (of class org.apache.spark.sql.catalyst.expressions.EqualTo)
> at org.apache.spark.sql.execution.command.AlterTableDropPartitionCommand$$anonfun$10$$anonfun$11.apply(ddl.scala:462)
> at org.apache.spark.sql.execution.command.AlterTableDropPartitionCommand$$anonfun$10$$anonfun$11.apply(ddl.scala:462)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.immutable.List.foreach(List.scala:381)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.immutable.List.map(List.scala:285)
> at org.apache.spark.sql.execution.command.AlterTableDropPartitionCommand$$anonfun$10.apply(ddl.scala:462)
> at org.apache.spark.sql.execution.command.AlterTableDropPartitionCommand$$anonfun$10.apply(ddl.scala:461)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at org.apache.spark.sql.execution.command.AlterTableDropPartitionCommand.run(ddl.scala:461)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:591)
> ... 39 elided
> {noformat}
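For context on the error above: the MatchError value {{(cast(p#8 as int) = 10)}} shows that, for a non-string partition column, the drop-partition predicate reaches the command with the partition attribute wrapped in a Cast, so a match written only for the bare attribute-equals-literal shape falls through. The following is a minimal, self-contained sketch of that failure mode (an illustration assuming only spark-catalyst on the classpath, not Spark's actual source):

{noformat}
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.types.{IntegerType, StringType}

object DropPartitionMatchSketch {
  def main(args: Array[String]): Unit = {
    // Partition column "p" as an attribute reference.
    val p = AttributeReference("p", StringType)()

    // Shape seen for a string partition column: p = '10'
    val stringPredicate: Expression = EqualTo(p, Literal("10"))
    // Shape seen for an int partition column: cast(p as int) = 10,
    // i.e. the expression reported in the stack trace above.
    val intPredicate: Expression = EqualTo(Cast(p, IntegerType), Literal(10))

    // A match that only covers the bare-attribute form, mirroring the failing one.
    def keyValue(expr: Expression): (String, Any) = expr match {
      case EqualTo(key: Attribute, Literal(value, _)) => (key.name, value)
      // No case handles EqualTo(Cast(...), Literal(...)), so intPredicate falls through.
    }

    println(keyValue(stringPredicate)) // prints (p,10)
    println(keyValue(intPredicate))    // throws scala.MatchError, as in the trace above
  }
}
{noformat}

Running the sketch prints {{(p,10)}} for the string-shaped predicate and throws scala.MatchError for the cast-wrapped one, matching the reported behaviour.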
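The withdrawn Option 3 idea in the comment above (converting arbitrary-type constant spec literals into string literals up front) could look roughly like the sketch below. The helper name {{toStringLiteral}} is hypothetical, not a Spark API; as the comment notes, folding values to strings changes comparison semantics and diverges from Hive, which is why the idea was dropped.

{noformat}
import org.apache.spark.sql.catalyst.expressions.{Expression, Literal}
import org.apache.spark.sql.types.StringType

object StringLiteralRewriteSketch {
  // Hypothetical helper: rewrite any constant literal to its string form.
  def toStringLiteral(e: Expression): Expression = e match {
    case Literal(null, _) => Literal(null, StringType) // keep nulls string-typed
    case Literal(v, _)    => Literal(v.toString)       // fold the value to a string
    case other            => other                     // leave non-literals untouched
  }
}
{noformat}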