[ https://issues.apache.org/jira/browse/SPARK-36594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17404916#comment-17404916 ]
Apache Spark commented on SPARK-36594:
--------------------------------------

User 'c21' has created a pull request for this issue:

https://github.com/apache/spark/pull/33843

> ORC vectorized reader should properly check maximal number of fields
> --------------------------------------------------------------------
>
>                 Key: SPARK-36594
>                 URL: https://issues.apache.org/jira/browse/SPARK-36594
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.2.0, 3.3.0
>            Reporter: Cheng Su
>            Priority: Major
>
> Debugged internally and found a bug: the vectorized reader should be disabled
> based on the schema checked recursively. Currently we enable vectorization by
> checking that `schema.length` is no more than `wholeStageMaxNumFields`.
> However, `schema.length` does not count the sub-fields of nested columns
> (i.e. it treats a nested column the same as a primitive column), so this
> check is wrong once vectorization is enabled for nested columns. We should
> follow the same check as `WholeStageCodegenExec` and count sub-fields
> recursively. This does not cause a correctness issue, but it can cause a
> performance issue: we may enable vectorization by mistake for a nested column
> that has many sub-fields.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
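The recursive sub-field check described in the issue can be sketched as follows. This is a minimal Python illustration of the counting logic only, not Spark's actual Scala implementation; the tuple-based schema model and the threshold value are hypothetical.

```python
def count_fields(schema):
    """Recursively count leaf fields, so a nested column is not
    undercounted as a single field the way len(schema) would count it."""
    total = 0
    for field in schema:
        # Hypothetical model: a field is either a primitive name, or a
        # ("struct", [sub-fields...]) tuple for a nested column.
        if isinstance(field, tuple) and field[0] == "struct":
            total += count_fields(field[1])
        else:
            total += 1
    return total

# One primitive column plus one struct column with nested sub-fields.
schema = ["id", ("struct", ["a", "b", ("struct", ["c", "d"])])]

# len(schema) == 2, but the recursive count sees 5 leaf fields, so with a
# hypothetical wholeStageMaxNumFields of 4 vectorization would be disabled.
too_many = count_fields(schema) > 4
```

With the naive `len(schema)` check, this schema would pass the threshold and vectorization would be enabled by mistake; the recursive count catches the deeply nested sub-fields.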