[ https://issues.apache.org/jira/browse/SPARK-38182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
XiDuo You updated SPARK-38182:
------------------------------
    Description: 
reproduce:
{code:java}
CREATE TABLE t (c1 int) USING PARQUET;

SET spark.sql.optimizer.excludedRules=org.apache.spark.sql.catalyst.optimizer.BooleanSimplification;

SELECT * FROM t WHERE c1 = 1 AND 2 > 1;
{code}
and the error msg:
{code:java}
java.util.NoSuchElementException: next on empty iterator
  at scala.collection.Iterator$$anon$2.next(Iterator.scala:41)
  at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
  at scala.collection.mutable.LinkedHashSet$$anon$1.next(LinkedHashSet.scala:89)
  at scala.collection.IterableLike.head(IterableLike.scala:109)
  at scala.collection.IterableLike.head$(IterableLike.scala:108)
  at org.apache.spark.sql.catalyst.expressions.AttributeSet.head(AttributeSet.scala:69)
  at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex.$anonfun$listFiles$3(PartitioningAwareFileIndex.scala:85)
  at scala.Option.map(Option.scala:230)
  at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex.listFiles(PartitioningAwareFileIndex.scala:84)
  at org.apache.spark.sql.execution.FileSourceScanExec.selectedPartitions$lzycompute(DataSourceScanExec.scala:249)
{code}

  was:
reproduce:
{code:java}
CREATE TABLE pt (c1 int) USING PARQUET PARTITIONED BY (p string);

set spark.sql.optimizer.excludedRules=org.apache.spark.sql.catalyst.optimizer.BooleanSimplification;

SELECT * FROM pt WHERE p = 'a' AND 2 > 1;
{code}
and the error msg:
{code:java}
java.util.NoSuchElementException: next on empty iterator
  at scala.collection.Iterator$$anon$2.next(Iterator.scala:41)
  at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
  at scala.collection.mutable.LinkedHashSet$$anon$1.next(LinkedHashSet.scala:89)
  at scala.collection.IterableLike.head(IterableLike.scala:109)
  at scala.collection.IterableLike.head$(IterableLike.scala:108)
  at org.apache.spark.sql.catalyst.expressions.AttributeSet.head(AttributeSet.scala:69)
  at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex.$anonfun$listFiles$3(PartitioningAwareFileIndex.scala:85)
  at scala.Option.map(Option.scala:230)
  at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex.listFiles(PartitioningAwareFileIndex.scala:84)
  at org.apache.spark.sql.execution.FileSourceScanExec.selectedPartitions$lzycompute(DataSourceScanExec.scala:249)
{code}


> Fix NoSuchElementException if pushed filter does not contain any references
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-38182
>                 URL: https://issues.apache.org/jira/browse/SPARK-38182
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.3.0
>            Reporter: XiDuo You
>            Priority: Major
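The stack trace pinpoints the failure: AttributeSet.head invoked from PartitioningAwareFileIndex.listFiles. The pushed filter 2 > 1 contains no attribute references (BooleanSimplification would normally eliminate it, but the repro excludes that rule), so calling .head on its empty reference set throws. Below is a minimal, self-contained Scala sketch of that failure mode; Filter, isMetadataCol, and the classification predicate are illustrative stand-ins rather than Spark's actual code, assuming the crash comes from a vacuous forall over an empty reference set followed by a .head call:
{code:scala}
// Sketch only (not Spark source): a predicate with no attribute references
// vacuously passes a forall() over its reference set, and a later .head
// on that empty set throws java.util.NoSuchElementException.
object EmptyReferencesSketch {
  // Hypothetical stand-in for a pushed-down filter and its references.
  final case class Filter(sql: String, references: Set[String])

  // Hypothetical classifier applied to reference names.
  def isMetadataCol(name: String): Boolean = name.startsWith("_metadata")

  def main(args: Array[String]): Unit = {
    val pushed = Seq(
      Filter("c1 = 1", Set("c1")),
      Filter("2 > 1", Set.empty)) // constant predicate: no references

    // forall over an empty set is vacuously true, so "2 > 1" slips through.
    val selected = pushed.filter(_.references.forall(isMetadataCol))
    // selected.head.references.head  // would throw NoSuchElementException

    // A nonEmpty guard keeps reference-free predicates out entirely.
    val guarded = pushed.filter(f =>
      f.references.nonEmpty && f.references.forall(isMetadataCol))

    println(selected.map(_.sql)) // List(2 > 1)
    println(guarded.map(_.sql))  // List()
  }
}
{code}
Note that with BooleanSimplification enabled (the default), the TRUE that 2 > 1 constant-folds into is removed from the AND before pushdown, which is why the repro only fails once the rule is excluded.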