[ https://issues.apache.org/jira/browse/SPARK-35817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365764#comment-17365764 ]
Bruce Robbins commented on SPARK-35817:
---------------------------------------

[~xkrogen] Thanks!
{quote}I guess we should create a map of one side's fields (with lowercased names for case-insensitivity)
{quote}
That's what I did locally to test it out: a map (keyed by lowercased names) with sequences as the values (since a lower-case name could map to multiple mixed-case fields).

> Queries against wide Avro tables can be slow
> --------------------------------------------
>
>                 Key: SPARK-35817
>                 URL: https://issues.apache.org/jira/browse/SPARK-35817
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.2.0
>            Reporter: Bruce Robbins
>            Priority: Major
>
> A query against an Avro table can be quite slow when all of the following are true:
> - There are many columns in the Avro file
> - The query contains a wide projection
> - There are many splits in the input
> - Some of the splits are read serially (e.g., fewer executors than there are tasks)
>
> A write to an Avro table can be quite slow when all of the following are true:
> - There are many columns in the new rows
> - The operation is creating many files
>
> For example, a single-threaded query against a 6000-column Avro data set with 50K rows and 20 files takes less than a minute with Spark 3.0.1 but over 7 minutes with Spark 3.2.0-SNAPSHOT.
>
> The culprit appears to be this line of code:
> https://github.com/apache/spark/blob/3fb044e043a2feab01d79b30c25b93d4fd166b12/external/avro/src/main/scala/org/apache/spark/sql/avro/AvroUtils.scala#L226
>
> For each split, AvroDeserializer will call this function once for each column in the projection, resulting in a potential n^2 lookup per split.
>
> For each file, AvroSerializer will call this function once for each column, resulting in an n^2 lookup per file.
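The approach described in the comment can be sketched as follows. This is a hedged illustration, not the actual Spark patch: `Field` and `FieldIndex` are hypothetical stand-ins for `org.apache.avro.Schema.Field` and whatever helper the fix actually introduces. The idea is to build the lowercased-name map once per schema, so each per-column lookup is O(1) instead of an O(n) scan over all fields:

```scala
// Sketch (assumed names): index fields by lowercased name so repeated
// case-insensitive lookups avoid a linear scan per column.
// `Field` stands in for org.apache.avro.Schema.Field.
case class Field(name: String, pos: Int)

class FieldIndex(fields: Seq[Field]) {
  // Lowercased name -> all matching fields. Case-insensitive matching can
  // hit several mixed-case candidates, hence Seq-valued entries.
  private val byLowerName: Map[String, Seq[Field]] =
    fields.groupBy(_.name.toLowerCase(java.util.Locale.ROOT))

  def lookup(name: String): Seq[Field] =
    byLowerName.getOrElse(name.toLowerCase(java.util.Locale.ROOT), Seq.empty)
}
```

Built once per split (deserialization) or per file (serialization), this turns the n^2 pattern above into n map probes:

```scala
val idx = new FieldIndex(Seq(Field("Id", 0), Field("id", 1), Field("name", 2)))
idx.lookup("ID")   // both mixed-case candidates: Seq(Field("Id", 0), Field("id", 1))
idx.lookup("name") // Seq(Field("name", 2))
```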
--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org