[ https://issues.apache.org/jira/browse/SPARK-32628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17178511#comment-17178511 ]
Yuming Wang commented on SPARK-32628:
-------------------------------------

Benchmark:
{code:scala}
// Needed outside spark-shell: brings col() into scope.
import org.apache.spark.sql.functions.col

// Fact table: 20 billion rows, partitioned by k (id % 2000 => 2000 partitions).
spark.range(20000000000L)
  .select(col("id"), col("id").%(2000).as("k"))
  .write
  .partitionBy("k")
  .mode("overwrite")
  .saveAsTable("df1")

// Dimension table: 2 billion rows, k == id.
spark.range(2000000000L)
  .select(col("id"), col("id").as("k"))
  .write
  .mode("overwrite")
  .saveAsTable("df2")

// Join that should benefit from dynamic partition pruning on df1.k.
spark.sql("CREATE TABLE t_result1 USING parquet AS SELECT df1.id, df2.k FROM df1 JOIN df2 ON df1.k = df2.k AND df2.id > 1500 AND df2.id < 1000000000L")
{code}

> Use bloom filter to improve dynamicPartitionPruning
> ---------------------------------------------------
>
>                 Key: SPARK-32628
>                 URL: https://issues.apache.org/jira/browse/SPARK-32628
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.1.0
>            Reporter: Yuming Wang
>            Priority: Major
>
> It throws an exception when
> {{spark.sql.optimizer.dynamicPartitionPruning.reuseBroadcastOnly}} is
> disabled:
> {code:sql}
> select catalog_sales.* from catalog_sales join catalog_returns where
> cr_order_number = cs_sold_date_sk and cr_returned_time_sk < 40000;
> {code}
> {noformat}
> 20/08/16 06:44:42 ERROR TaskSetManager: Total size of serialized results of
> 494 tasks (1225.3 MiB) is bigger than spark.driver.maxResultSize (1024.0 MiB)
> {noformat}
> We can improve this by using the minimum, the maximum, and a Bloom filter of the
> join keys to reduce the size of the serialized results.
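To make the min/max + Bloom filter idea above concrete, here is a minimal sketch using Spark's built-in {{org.apache.spark.util.sketch.BloomFilter}} via {{DataFrame.stat.bloomFilter}}. It is illustrative only, not the actual dynamic partition pruning implementation: it reuses the benchmark tables above, and the expected item count and false-positive rate are assumed values.

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// Pruning side of the join: df2 after its filters, as in the benchmark query.
val prunedDim = spark.table("df2").where("id > 1500 AND id < 1000000000L")

// Collect only a compact summary of the join keys instead of every distinct value:
// a Bloom filter (expected items and fpp are assumed here) plus the key min/max.
val keyFilter = prunedDim.stat.bloomFilter("k", 2000L, 0.01)
val (minK, maxK) = prunedDim.selectExpr("min(k)", "max(k)").as[(Long, Long)].first()

// A partition value of df1 only needs to be scanned if it survives both checks.
// A false positive from the Bloom filter only costs an unnecessary scan, never a wrong result.
def partitionMightMatch(k: Long): Boolean =
  k >= minK && k <= maxK && keyFilter.mightContain(k)
{code}

The point of the sketch is that the driver-side state is bounded by the Bloom filter size plus two longs, regardless of how many distinct keys the pruning side produces, rather than growing with the full set of key values as in the error above.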