[ https://issues.apache.org/jira/browse/SPARK-20184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15974682#comment-15974682 ]
Kazuaki Ishizaki commented on SPARK-20184:
------------------------------------------

I succeeded in reproducing this...

{code}
% git log | head -2
commit 773754b6c1516c15b64846a00e491535cbcb1007
Author: Liang-Chi Hsieh <vii...@gmail.com>

% bin/spark-submit --driver-memory 16g --class org.apache.spark.sql.execution.benchmark.SPARK20184 sql/core/target/spark-sql_2.11-2.2.0-SNAPSHOT-tests.jar
...
OpenJDK 64-Bit Server VM 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13 on Linux 4.4.0-66-generic
Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz

SPARK-20184:                             Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------
codegen = T                                   2840 / 3008          0.0  2839940132.0       1.0X
codegen = F                                    792 /  841          0.0   792284833.0       3.6X
{code}

{code}
package org.apache.spark.sql.execution.benchmark

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.util.Benchmark

object SPARK20184 {
  val conf = new SparkConf()
    .setMaster("local[1]")
    .setAppName("test")
    .set("spark.driver.memory", "16g")
  val spark = SparkSession.builder.config(conf).getOrCreate()

  import spark.implicits._

  def main(args: Array[String]): Unit = {
    // 500,000 rows: two string dimension columns plus 20 integer measure columns.
    val df = (1 to 500000)
      .map(x => (x.toString, x.toString, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x))
      .toDF("dim_1", "dim_2", "c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10",
        "c11", "c12", "c13", "c14", "c15", "c16", "c17", "c18", "c19", "c20")
    df.write.saveAsTable("sum_table_50w_3")

    // 20 sums grouped by two dimensions; this produces a very large generated method.
    val query = "select dim_1, dim_2," +
      "sum(c1), sum(c2), sum(c3), sum(c4), sum(c5), sum(c6), sum(c7), sum(c8), sum(c9), sum(c10)," +
      "sum(c11), sum(c12), sum(c13), sum(c14), sum(c15), sum(c16), sum(c17), sum(c18), sum(c19)," +
      "sum(c20) from sum_table_50w_3 group by dim_1, dim_2 limit 100"

    val benchmark = new Benchmark("SPARK-20184", 1, 20, outputPerIteration = true)
    benchmark.addCase("codegen = T") { i =>
      spark.conf.set("spark.sql.codegen.wholeStage", "true")
      spark.sql(query).collect
    }
    benchmark.addCase("codegen = F") { i =>
      spark.conf.set("spark.sql.codegen.wholeStage", "false")
      spark.sql(query).collect
    }
    benchmark.run()
  }
}
{code}

> performance regression for complex/long sql when enable whole stage codegen
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-20184
>                 URL: https://issues.apache.org/jira/browse/SPARK-20184
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.6.0, 2.1.0
>            Reporter: Fei Wang
>
> The performance of the following SQL gets much worse in Spark 2.x compared with codegen off.
> SELECT
>        sum(COUNTER_57)
>       ,sum(COUNTER_71)
>       ,sum(COUNTER_3)
>       ,sum(COUNTER_70)
>       ,sum(COUNTER_66)
>       ,sum(COUNTER_75)
>       ,sum(COUNTER_69)
>       ,sum(COUNTER_55)
>       ,sum(COUNTER_63)
>       ,sum(COUNTER_68)
>       ,sum(COUNTER_56)
>       ,sum(COUNTER_37)
>       ,sum(COUNTER_51)
>       ,sum(COUNTER_42)
>       ,sum(COUNTER_43)
>       ,sum(COUNTER_1)
>       ,sum(COUNTER_76)
>       ,sum(COUNTER_54)
>       ,sum(COUNTER_44)
>       ,sum(COUNTER_46)
>       ,DIM_1
>       ,DIM_2
>       ,DIM_3
> FROM aggtable group by DIM_1, DIM_2, DIM_3 limit 100;
> aggtable has about 35,000,000 rows.
> Whole-stage codegen on (spark.sql.codegen.wholeStage = true): 40s
> Whole-stage codegen off (spark.sql.codegen.wholeStage = false): 6s
> After some analysis, I think this is related to the huge Java method (thousands of lines long) generated by codegen.
> If I set -XX:-DontCompileHugeMethods, performance gets much better (about 7s).
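As an illustrative, untested sketch of the -XX:-DontCompileHugeMethods workaround mentioned in the description: the flag has to be set when the JVM is launched, so one way to try it is through spark-submit. The class name and jar path below are simply those from the reproduction above, not a command that was actually run for this report.

{code}
# Untested sketch: pass the HotSpot flag at JVM launch. In local[1] mode everything
# runs in the driver JVM, so --driver-java-options is the relevant option here;
# spark.executor.extraJavaOptions is shown only for cluster deployments.
% bin/spark-submit --driver-memory 16g \
    --driver-java-options "-XX:-DontCompileHugeMethods" \
    --conf "spark.executor.extraJavaOptions=-XX:-DontCompileHugeMethods" \
    --class org.apache.spark.sql.execution.benchmark.SPARK20184 \
    sql/core/target/spark-sql_2.11-2.2.0-SNAPSHOT-tests.jar
{code}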