[ https://issues.apache.org/jira/browse/SPARK-42745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Toth updated SPARK-42745:
-------------------------------
    Description: 
After SPARK-40086 / SPARK-42049, the following simple query containing a subselect expression:
{noformat}
select (select sum(id) from t1)
{noformat}
fails with the following NullPointerException when DSv2 is enabled:

{noformat}
09:48:57.645 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 3.0 (TID 3)
java.lang.NullPointerException
        at org.apache.spark.sql.execution.datasources.v2.BatchScanExec.batch$lzycompute(BatchScanExec.scala:47)
        at org.apache.spark.sql.execution.datasources.v2.BatchScanExec.batch(BatchScanExec.scala:47)
        at org.apache.spark.sql.execution.datasources.v2.BatchScanExec.hashCode(BatchScanExec.scala:60)
        at scala.runtime.Statics.anyHash(Statics.java:122)
        ...
        at org.apache.spark.sql.catalyst.trees.TreeNode.hashCode(TreeNode.scala:249)
        at scala.runtime.Statics.anyHash(Statics.java:122)
        at scala.collection.mutable.HashTable$HashUtils.elemHashCode(HashTable.scala:416)
        at scala.collection.mutable.HashTable$HashUtils.elemHashCode$(HashTable.scala:416)
        at scala.collection.mutable.HashMap.elemHashCode(HashMap.scala:44)
        at scala.collection.mutable.HashTable.addEntry(HashTable.scala:149)
        at scala.collection.mutable.HashTable.addEntry$(HashTable.scala:148)
        at scala.collection.mutable.HashMap.addEntry(HashMap.scala:44)
        at scala.collection.mutable.HashTable.init(HashTable.scala:110)
        at scala.collection.mutable.HashTable.init$(HashTable.scala:89)
        at scala.collection.mutable.HashMap.init(HashMap.scala:44)
        at scala.collection.mutable.HashMap.readObject(HashMap.scala:195)
        ...
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:87)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:129)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:85)
        at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
        at org.apache.spark.scheduler.Task.run(Task.scala:139)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1520)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
{noformat}
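
Something like the sketch below should hit the same NPE end to end. {{InMemoryTableCatalog}} is Spark's test-only DSv2 catalog, so the snippet assumes the catalyst test classes are on the classpath; the catalog name {{testcat}} follows the convention in Spark's own test suites and is otherwise arbitrary.

{code:scala}
import org.apache.spark.sql.SparkSession

// Register the test-only in-memory DSv2 catalog under the name "testcat".
val spark = SparkSession.builder()
  .master("local[*]")
  .config("spark.sql.catalog.testcat",
    "org.apache.spark.sql.connector.catalog.InMemoryTableCatalog")
  .getOrCreate()

// t1 lives in the DSv2 catalog, so scanning it is planned as a BatchScanExec.
spark.sql("CREATE TABLE testcat.ns.t1 (id BIGINT)")
spark.sql("INSERT INTO testcat.ns.t1 VALUES (1), (2), (3)")

// Running the scalar subquery ships a serialized plan fragment to an
// executor, which is where the NullPointerException above is thrown.
spark.sql("SELECT (SELECT sum(id) FROM testcat.ns.t1)").show()
{code}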
The DSv2-specific mechanism is visible in the trace: a {{scala.collection.mutable.HashMap}} that has plan nodes among its keys gets Java-serialized into the task, and {{HashMap.readObject}} rehashes every key during deserialization. That invokes {{BatchScanExec.hashCode}} on the executor, where the lazily computed {{batch}} apparently dereferences state that does not survive serialization, hence the NullPointerException.
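
The failure pattern can be shown without any Spark classes. The sketch below (all names illustrative) serializes a mutable {{HashMap}} whose key's {{hashCode}} depends on a transient field, just like {{BatchScanExec.hashCode}} in the trace:

{code:scala}
import java.io._
import scala.collection.mutable

object HashCodeOnDeserialize extends App {
  // Like BatchScanExec, this node's hashCode depends on state that is not
  // serialized and so is only available where the object was constructed.
  class Node(@transient val driverOnlyState: AnyRef) extends Serializable {
    override def hashCode(): Int = driverOnlyState.hashCode()
  }

  val map = mutable.HashMap[Node, Int](new Node(new Object) -> 1)

  val bytes = new ByteArrayOutputStream()
  new ObjectOutputStream(bytes).writeObject(map)

  // HashMap.readObject re-inserts every entry and therefore re-invokes
  // hashCode on each key; driverOnlyState is null here, so this throws the
  // same NullPointerException as the executor-side trace above.
  new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray)).readObject()
}
{code}

A map like this is only safe to serialize if its keys' {{hashCode}} can be computed on the receiving side, which plan nodes with transient scan state do not guarantee.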

> Improved AliasAwareOutputExpression works with DSv2
> ---------------------------------------------------
>
>                 Key: SPARK-42745
>                 URL: https://issues.apache.org/jira/browse/SPARK-42745
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.4.0, 3.5.0
>            Reporter: Peter Toth
>            Assignee: Peter Toth
>            Priority: Major
>             Fix For: 3.4.0