[
https://issues.apache.org/jira/browse/SPARK-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15970652#comment-15970652
]
Umesh Chaudhary commented on SPARK-20299:
-----------------------------------------
My bad, previously I was indeed trying to reproduce this on Spark 2.1. I was
able to reproduce the issue by following the steps mentioned above.
After debugging the behaviour, I observed a difference in the generated
"CleanExpressions" batches, as shown below:
{code}
=== Result of Batch CleanExpressions (Spark 2.1.0) ===
!staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType,
fromString, assertnotnull(input[0, scala.Tuple2, true], top level Product input
object)._1, true) AS _1#0
staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType,
fromString, assertnotnull(input[0, scala.Tuple2, true], top level Product input
object)._1, true)
!+- staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType,
fromString, assertnotnull(input[0, scala.Tuple2, true], top level Product input
object)._1, true)
+- assertnotnull(input[0, scala.Tuple2, true], top level Product input
object)._1
! +- assertnotnull(input[0, scala.Tuple2, true], top level Product input
object)._1
+- assertnotnull(input[0, scala.Tuple2, true], top level Product input object)
! +- assertnotnull(input[0, scala.Tuple2, true], top level Product input
object)
+- input[0, scala.Tuple2, true]
! +- input[0, scala.Tuple2, true]
{code}
{code}
=== Result of Batch CleanExpressions (Spark 2.2.0)===
!staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType,
fromString, assertnotnull(assertnotnull(input[0, scala.Tuple2, true]))._1,
true) AS _1#0
staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType,
fromString, assertnotnull(assertnotnull(input[0, scala.Tuple2, true]))._1, true)
!+- staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType,
fromString, assertnotnull(assertnotnull(input[0, scala.Tuple2, true]))._1,
true)
+- assertnotnull(assertnotnull(input[0, scala.Tuple2, true]))._1
! +- assertnotnull(assertnotnull(input[0, scala.Tuple2, true]))._1
+- assertnotnull(assertnotnull(input[0, scala.Tuple2, true]))
! +- assertnotnull(assertnotnull(input[0, scala.Tuple2, true]))
+- assertnotnull(input[0, scala.Tuple2, true])
! +- assertnotnull(input[0, scala.Tuple2, true])
+- input[0, scala.Tuple2, true]
! +- input[0, scala.Tuple2, true]
{code}
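The plans above came from the optimizer's debug output; as a rough sketch (API details vary between Spark versions, so treat this as an assumption), the serializer expressions that the tuple encoder generates can also be inspected directly from a Spark shell:

{code}
// Sketch (Spark 2.x shell): print the serializer expressions the
// (String, Int) tuple encoder produces, to compare the assertnotnull
// nesting between versions.
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

val enc = ExpressionEncoder[(String, Int)]()
enc.serializer.foreach(e => println(e.treeString))
{code}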
There is an additional "assertnotnull" wrapper around the tuple rows, which
seems to result from changes in the CodeGenerator. Need to confirm with
[~marmbrus], as this appears to be the root cause of the issue.
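In the meantime, a possible workaround (a sketch based on the report below, not verified against 2.2.0-SNAPSHOT) is to model the nullable column as {{Option[Int]}} so the encoder maps the missing value to a SQL NULL explicitly instead of tripping the non-null assertion on a boxed null:

{code}
// Spark shell sketch: use Option[Int] for the nullable field so None
// becomes a NULL in the resulting Dataset rather than hitting the
// assertnotnull check inside the generated projection.
import spark.implicits._

val ds = Seq(("1", Option.empty[Int]), ("2", Some(1))).toDS
// expected schema: _1: string, _2: int (nullable)
{code}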
> NullPointerException when null and string are in a tuple while encoding
> Dataset
> -------------------------------------------------------------------------------
>
> Key: SPARK-20299
> URL: https://issues.apache.org/jira/browse/SPARK-20299
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.2.0
> Reporter: Jacek Laskowski
> Priority: Minor
>
> When creating a Dataset from a tuple with {{null}} and a string, NPE is
> reported. When either is removed, it works fine.
> {code}
> scala> Seq((1, null.asInstanceOf[Int]), (2, 1)).toDS
> res43: org.apache.spark.sql.Dataset[(Int, Int)] = [_1: int, _2: int]
> scala> Seq(("1", null.asInstanceOf[Int]), ("2", 1)).toDS
> java.lang.RuntimeException: Error while encoding:
> java.lang.NullPointerException
> staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType,
> fromString, assertnotnull(assertnotnull(input[0, scala.Tuple2, true], top
> level Product input object), - root class: "scala.Tuple2")._1, true) AS _1#474
> assertnotnull(assertnotnull(input[0, scala.Tuple2, true], top level Product
> input object), - root class: "scala.Tuple2")._2 AS _2#475
> at
> org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:290)
> at
> org.apache.spark.sql.SparkSession$$anonfun$2.apply(SparkSession.scala:454)
> at
> org.apache.spark.sql.SparkSession$$anonfun$2.apply(SparkSession.scala:454)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.immutable.List.foreach(List.scala:381)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.immutable.List.map(List.scala:285)
> at org.apache.spark.sql.SparkSession.createDataset(SparkSession.scala:454)
> at org.apache.spark.sql.SQLContext.createDataset(SQLContext.scala:377)
> at
> org.apache.spark.sql.SQLImplicits.localSeqToDatasetHolder(SQLImplicits.scala:246)
> ... 48 elided
> Caused by: java.lang.NullPointerException
> at
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_1$(Unknown
> Source)
> at
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
> Source)
> at
> org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:287)
> ... 58 more
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)