Github user kiszk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22847#discussion_r228598058

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -812,6 +812,17 @@ object SQLConf {
     .intConf
     .createWithDefault(65535)

+  val CODEGEN_METHOD_SPLIT_THRESHOLD = buildConf("spark.sql.codegen.methodSplitThreshold")
+    .internal()
+    .doc("The maximum source code length of a single Java function by codegen. When the " +
+      "generated Java function source code exceeds this threshold, it will be split into " +
+      "multiple small functions, each function length is spark.sql.codegen.methodSplitThreshold." +
--- End diff --

IMHO, `spark.sql.codegen.methodSplitThreshold` can be used, but the description should be changed. For example: `The threshold of source code length without comment of a single Java function by codegen to be split. When the generated Java function source code exceeds this threshold, it will be split into multiple small functions. ...`
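To illustrate the reviewer's point, here is a minimal, hypothetical Scala sketch (not Spark's actual implementation) of why "length without comment" matters: the split decision should be made on the comment-free source length, since comments inflate the raw string length without adding to the compiled method size. The names `stripComments`, `shouldSplit`, and the threshold value are invented for this example.

```scala
// Hypothetical sketch, not the real SQLConf/codegen code: shows a split
// decision based on source length *excluding comments*, as the reviewer
// suggests the doc string should describe.
object MethodSplitSketch {
  // Made-up threshold for illustration; the real default lives in SQLConf.
  val DefaultThreshold = 1024

  // Strip /* ... */ block comments and // line comments before measuring.
  def stripComments(code: String): String =
    code.replaceAll("(?s)/\\*.*?\\*/", "").replaceAll("//.*", "")

  // Split only when the comment-free source exceeds the threshold.
  def shouldSplit(code: String, threshold: Int = DefaultThreshold): Boolean =
    stripComments(code).length > threshold
}
```

With this definition, a long function whose length comes mostly from comments would not trigger a split, whereas the same length of actual code would.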