[ https://issues.apache.org/jira/browse/SPARK-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15580989#comment-15580989 ]
Low Chin Wei commented on SPARK-13747:
--------------------------------------

java.lang.IllegalArgumentException: spark.sql.execution.id is already set
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:81) ~[spark-sql_2.11-2.0.1.jar:2.0.1]
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2546) ~[spark-sql_2.11-2.0.1.jar:2.0.1]
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2192) ~[spark-sql_2.11-2.0.1.jar:2.0.1]
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2199) ~[spark-sql_2.11-2.0.1.jar:2.0.1]
    at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2227) ~[spark-sql_2.11-2.0.1.jar:2.0.1]
    at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2226) ~[spark-sql_2.11-2.0.1.jar:2.0.1]
    at org.apache.spark.sql.Dataset.withCallback(Dataset.scala:2559) ~[spark-sql_2.11-2.0.1.jar:2.0.1]
    at org.apache.spark.sql.Dataset.count(Dataset.scala:2226) ~[spark-sql_2.11-2.0.1.jar:2.0.1]

> Concurrent execution in SQL doesn't work with Scala ForkJoinPool
> ----------------------------------------------------------------
>
>                 Key: SPARK-13747
>                 URL: https://issues.apache.org/jira/browse/SPARK-13747
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Shixiong Zhu
>            Assignee: Andrew Or
>             Fix For: 2.0.0
>
>
> Running the following code may fail:
> {code}
> (1 to 100).par.foreach { _ =>
>   println(sc.parallelize(1 to 5).map { i => (i, i) }.toDF("a", "b").count())
> }
>
> java.lang.IllegalArgumentException: spark.sql.execution.id is already set
>     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:87)
>     at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1904)
>     at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1385)
> {code}
> This is because SparkContext.runJob can be suspended when running on a ForkJoinPool (e.g., scala.concurrent.ExecutionContext.Implicits.global), since it calls Await.ready (introduced by https://github.com/apache/spark/pull/9264). So while SparkContext.runJob is suspended, the ForkJoinPool may run another task on the same thread, but by then the thread's local properties have been polluted.
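To make the failure mode concrete, here is a minimal, self-contained sketch (plain Scala, no Spark required) of the check-set-clear pattern that SQLExecution.withNewExecutionId applies to a thread-local execution id. The property name and exception message are taken from the stack traces above; everything else (the ExecutionIdDemo object, the nested call standing in for a ForkJoinPool scheduling a second task onto a thread parked in Await.ready) is simulated for illustration:

{code}
object ExecutionIdDemo {
  // Stand-in for SparkContext's per-thread local properties.
  private val localProps = new ThreadLocal[java.util.Properties] {
    override def initialValue(): java.util.Properties = new java.util.Properties()
  }

  // Mirrors the check-set-clear pattern of SQLExecution.withNewExecutionId.
  def withNewExecutionId[T](body: => T): T = {
    val props = localProps.get()
    if (props.getProperty("spark.sql.execution.id") != null) {
      // The check that throws in the stack traces above.
      throw new IllegalArgumentException("spark.sql.execution.id is already set")
    }
    props.setProperty("spark.sql.execution.id", "0")
    try body finally props.remove("spark.sql.execution.id")
  }

  def main(args: Array[String]): Unit = {
    withNewExecutionId {
      // While task 1 is blocked in Await.ready, a ForkJoinPool can run
      // task 2 on the same thread; task 2 sees the id still set and throws.
      withNewExecutionId(println("never reached"))
    }
  }
}
{code}

The pattern is safe as long as the thread that sets the property runs nothing else until the finally block clears it; ForkJoinPool's work-stealing breaks exactly that assumption.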
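Until running on a build with the fix, one workaround is to keep SQL actions off ForkJoinPool-backed execution contexts. The following is a sketch only, assuming a Spark 2.0.x shell with a SparkSession bound to the name spark; the pool size of 8 is arbitrary. A plain fixed thread pool simply blocks inside Await.ready instead of letting another task be interleaved onto the suspended thread, so each job's local properties stay private to its own thread:

{code}
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

// Dedicated pool: its workers are plain threads, not ForkJoinWorkerThreads,
// so no second task ever runs on a thread that is parked in Await.ready.
implicit val ec: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(8))

import spark.implicits._

val jobs = (1 to 100).map { _ =>
  Future {
    spark.sparkContext.parallelize(1 to 5).map(i => (i, i)).toDF("a", "b").count()
  }
}
jobs.foreach(f => println(Await.result(f, Duration.Inf)))
{code}

Giving each SQL action its own dedicated thread works for the same reason; the essential point is that the thread which sets spark.sql.execution.id must not pick up any other task before it clears the property.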