[jira] [Commented] (SPARK-26555) Thread safety issue causes createDataset to fail with misleading errors
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16868178#comment-16868178 ] Josh Rosen commented on SPARK-26555: Backported for 2.4.4 in https://github.com/apache/spark/commit/ba7f61e25d58aa379f94a23b03503a25574529bc

> Thread safety issue causes createDataset to fail with misleading errors
> -----------------------------------------------------------------------
>
>                 Key: SPARK-26555
>                 URL: https://issues.apache.org/jira/browse/SPARK-26555
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Martin Loncaric
>            Assignee: Martin Loncaric
>            Priority: Major
>             Fix For: 2.4.4, 3.0.0
>
> This can be replicated (~2% of the time) with
> {code:scala}
> import java.sql.Timestamp
> import java.util.concurrent.{Executors, Future}
>
> import org.apache.spark.sql.SparkSession
>
> import scala.collection.mutable.ListBuffer
> import scala.concurrent.ExecutionContext
> import scala.util.Random
>
> object Main {
>   def main(args: Array[String]): Unit = {
>     val sparkSession = SparkSession.builder
>       .getOrCreate()
>     import sparkSession.implicits._
>
>     val executor = Executors.newFixedThreadPool(1)
>     try {
>       implicit val xc: ExecutionContext = ExecutionContext.fromExecutorService(executor)
>       val futures = new ListBuffer[Future[_]]()
>       for (i <- 1 to 3) {
>         futures += executor.submit(new Runnable {
>           override def run(): Unit = {
>             val d = if (Random.nextInt(2) == 0) Some("d value") else None
>             val e = if (Random.nextInt(2) == 0) Some(5.0) else None
>             val f = if (Random.nextInt(2) == 0) Some(6.0) else None
>             println("DEBUG", d, e, f)
>             sparkSession.createDataset(Seq(
>               MyClass(new Timestamp(1L), "b", "c", d, e, f)
>             ))
>           }
>         })
>       }
>       futures.foreach(_.get())
>     } finally {
>       println("SHUTDOWN")
>       executor.shutdown()
>       sparkSession.stop()
>     }
>   }
>
>   case class MyClass(
>     a: Timestamp,
>     b: String,
>     c: String,
>     d: Option[String],
>     e: Option[Double],
>     f: Option[Double]
>   )
> }
> {code}
> So it will usually come up during
> {code:bash}
> for i in $(seq 1 200); do
>   echo $i
>   spark-submit --master local[4] target/scala-2.11/spark-test_2.11-0.1.jar
> done
> {code}
> causing a variety of possible errors, such as
> {code}
> Exception in thread "main" java.util.concurrent.ExecutionException: scala.MatchError: scala.Option[String] (of class scala.reflect.internal.Types$ClassArgsTypeRef)
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> Caused by: scala.MatchError: scala.Option[String] (of class scala.reflect.internal.Types$ClassArgsTypeRef)
>     at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$deserializerFor$1.apply(ScalaReflection.scala:210)
> {code}
> or
> {code}
> Exception in thread "main" java.util.concurrent.ExecutionException: java.lang.UnsupportedOperationException: Schema for type scala.Option[scala.Double] is not supported
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> Caused by: java.lang.UnsupportedOperationException: Schema for type scala.Option[scala.Double] is not supported
>     at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$schemaFor$1.apply(ScalaReflection.scala:789)
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
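For users stuck on a version without the backport, a common mitigation is to serialize the reflective work yourself before handing it to concurrent threads. A minimal sketch of that guarded-lock pattern follows; it uses only the Scala reflection stdlib, and the `EncoderGuard` object and its method names are hypothetical illustrations, not Spark API:

```scala
import scala.reflect.runtime.universe._

// Hypothetical user-side guard: funnel all reflective type inspection
// through a single lock, mirroring the serialization the linked fix
// applies inside Spark. Names here are illustrative, not Spark API.
object EncoderGuard {
  private val lock = new Object

  // Inspect a type the way ScalaReflection-style code does, but under a
  // global lock so concurrent callers cannot race the reflection runtime.
  def typeArgOfOption(tpe: Type): Option[Type] = lock.synchronized {
    if (tpe <:< typeOf[Option[_]]) tpe.typeArgs.headOption else None
  }
}

object EncoderGuardDemo {
  def main(args: Array[String]): Unit = {
    val threads = (1 to 4).map { _ =>
      new Thread(new Runnable {
        override def run(): Unit = {
          // Each thread repeatedly inspects Option types concurrently.
          (1 to 100).foreach { _ =>
            val arg = EncoderGuard.typeArgOfOption(typeOf[Option[String]])
            assert(arg.exists(_ =:= typeOf[String]))
          }
        }
      })
    }
    threads.foreach(_.start())
    threads.foreach(_.join())
    println("all threads finished without reflection errors")
  }
}
```

The same idea applies one level up: deriving the encoder once on the driver's main thread and reusing it in worker threads avoids triggering the reflective derivation concurrently at all.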
[jira] [Commented] (SPARK-26555) Thread safety issue causes createDataset to fail with misleading errors
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16834203#comment-16834203 ] Josh Rosen commented on SPARK-26555: I won't be able to tackle a backport for at least a week, so this is up for grabs in case someone else wants to do it. If I do end up working on this then I'll loop back here to claim it.
[jira] [Commented] (SPARK-26555) Thread safety issue causes createDataset to fail with misleading errors
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16834055#comment-16834055 ] Sean Owen commented on SPARK-26555: I personally think it's OK to backport -- do you want to open a PR and go for it?
[jira] [Commented] (SPARK-26555) Thread safety issue causes createDataset to fail with misleading errors
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16834025#comment-16834025 ] Josh Rosen commented on SPARK-26555: [~cloud_fan] [~srowen], could we backport this to the 2.4.x series? It'd be nice to have an LTS fix for users who can't immediately upgrade to 3.0.
[jira] [Commented] (SPARK-26555) Thread safety issue causes createDataset to fail with misleading errors
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792240#comment-16792240 ] Martin Loncaric commented on SPARK-26555: This is an existing issue with scala: https://github.com/scala/bug/issues/10766
[jira] [Commented] (SPARK-26555) Thread safety issue causes createDataset to fail with misleading errors
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792086#comment-16792086 ] Martin Loncaric commented on SPARK-26555: Update: I have been able to replicate this without Spark at all, using snippets from org.apache.spark.sql.catalyst.ScalaReflection: https://stackoverflow.com/questions/55150590/thread-safety-in-scala-reflection-with-type-matching
Investigating whether this can be fixed with different usage of the reflection library, or whether this is a scala issue.
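The Spark-free reproduction described above amounts to several threads pattern-matching against `Option` types through `scala.reflect.runtime.universe` at the same time. A standalone sketch of that shape is below; the object and method names are hypothetical, and on affected Scala versions the concurrent inspection may intermittently fail, so failures are counted rather than rethrown to keep the run deterministic:

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.reflect.runtime.universe._

// Standalone sketch (no Spark): several threads concurrently perform the
// kind of Type pattern matching that ScalaReflection.deserializerFor and
// schemaFor do. On affected Scala versions this can intermittently hit
// the reflection runtime's thread-safety bug; errors are tallied instead
// of rethrown so the program always terminates.
object ReflectionRaceSketch {
  private val errors = new AtomicInteger(0)

  def inspect(): Unit =
    try {
      val tpe = typeOf[Option[Double]]
      tpe match {
        case t if t <:< typeOf[Option[_]] =>
          require(t.typeArgs.head =:= typeOf[Double])
        case other =>
          // The "misleading error" path: a perfectly ordinary Option type
          // that fails to match under a race.
          sys.error(s"unexpected type shape: $other")
      }
    } catch {
      case _: Throwable => errors.incrementAndGet()
    }

  def main(args: Array[String]): Unit = {
    val threads = (1 to 8).map { _ =>
      new Thread(new Runnable {
        override def run(): Unit = (1 to 100).foreach(_ => inspect())
      })
    }
    threads.foreach(_.start())
    threads.foreach(_.join())
    println(s"reflection errors observed: ${errors.get}")
  }
}
```

A count of zero on a given run does not prove safety; the race is probabilistic, which is why the original report loops spark-submit 200 times.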
[jira] [Commented] (SPARK-26555) Thread safety issue causes createDataset to fail with misleading errors
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16788773#comment-16788773 ] Martin Loncaric commented on SPARK-26555: You can literally try any dataset and replicate this issue. For example,

{code:scala}
sparkSession.createDataset(Seq(
  MyClass(new Timestamp(1L), "b", "c", Some("d"), Some(1.0), Some(2.0))
))
{code}

I think the code I left is pretty clear - it fails sometimes. Run it once, and it may or may not work. I don't run multiple spark-submit's in parallel.
[jira] [Commented] (SPARK-26555) Thread safety issue causes createDataset to fail with misleading errors
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16788771#comment-16788771 ] Sean Owen commented on SPARK-26555: What is the fixed data set that reproduces this, to be clear? And you mean that if you run it once it works, but fails in parallel?
[jira] [Commented] (SPARK-26555) Thread safety issue causes createDataset to fail with misleading errors
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782821#comment-16782821 ] Martin Loncaric commented on SPARK-26555: Yes - when I take away any randomness and use the same dataset every time (say, with Some(something) for each optional value), I still get this issue.
[jira] [Commented] (SPARK-26555) Thread safety issue causes createDataset to fail with misleading errors
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782806#comment-16782806 ] Sean Owen commented on SPARK-26555: To be clear, is there a data set that works only when not run in this concurrent loop? What I'm reading here is simply that you generate datasets sometimes that (correctly, I believe) can't have their schema inferred.
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782258#comment-16782258 ] Martin Loncaric commented on SPARK-26555: - I can also replicate with different schemas containing Option. When I remove all Option columns from the schema, the sporadic failure goes away. This also never happens when I remove the concurrency.
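A possible workaround (my own sketch, not something proposed in this ticket): derive the Encoder for the case class once, on a single thread, before submitting any concurrent work, since encoder derivation is the step that goes through scala-reflect. Worker threads then reuse the pre-built encoder instead of each triggering reflection:

{code:scala}
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}

object PrewarmedEncoder {
  case class MyClass(
    a: java.sql.Timestamp,
    b: String,
    c: String,
    d: Option[String],
    e: Option[Double],
    f: Option[Double]
  )

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.getOrCreate()
    // Resolve the encoder eagerly while only the main thread is running.
    val enc: Encoder[MyClass] = Encoders.product[MyClass]
    // Later, each worker thread passes the shared encoder explicitly,
    // e.g. spark.createDataset(rows)(enc), so no thread re-enters
    // ScalaReflection concurrently.
    spark.stop()
  }
}
{code}

Whether this fully sidesteps the race is an assumption on my part; the actual fix landed in Spark itself.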
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782239#comment-16782239 ] Martin Loncaric commented on SPARK-26555: - I was able to replicate with both all rows as `Some()` and all rows as `None`.
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782185#comment-16782185 ] Martin Loncaric commented on SPARK-26555: - Will try it out and report back.
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782179#comment-16782179 ] Sean Owen commented on SPARK-26555: --- Hm, I might have that backwards; it might be when all are not None? At least, I have a strong suspicion it's to do with the data that gets generated in some runs. Maybe fix it at one data set and see if you can reproduce?
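The suggestion above (pinning the data to one fixed set) could be sketched as a hypothetical variant of the repro's run() body, with constants in place of the Random-driven Options:

{code:scala}
// Hypothetical repro variant: every submitted row is identical. If the
// failure still appears across runs, the generated data is ruled out and
// the concurrency itself remains the suspect.
val d: Option[String] = Some("d value") // held fixed; also try all-None
val e: Option[Double] = Some(5.0)
val f: Option[Double] = None
sparkSession.createDataset(Seq(
  MyClass(new java.sql.Timestamp(1L), "b", "c", d, e, f)
))
{code}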
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782146#comment-16782146 ] Sean Owen commented on SPARK-26555: --- This doesn't look like a Spark bug. It comes up, I think, when your random data set has all "None" for a column. That's what the error indicates at least. That part of the code shouldn't have any shared state. Can you verify from your debug output?
[ https://issues.apache.org/jira/browse/SPARK-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735447#comment-16735447 ] Hyukjin Kwon commented on SPARK-26555: -- Critical+ is usually reserved for committers. Please avoid setting this.