Github user kiszk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22847#discussion_r229577345
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -812,6 +812,17 @@ object SQLConf {
         .intConf
         .createWithDefault(65535)
     
    +  val CODEGEN_METHOD_SPLIT_THRESHOLD = buildConf("spark.sql.codegen.methodSplitThreshold")
    +    .internal()
    +    .doc("The threshold of source code length (without comments) at which a single Java " +
    +      "function produced by codegen is split. When the generated Java function's source " +
    +      "code exceeds this threshold, it will be split into multiple small functions. We " +
    +      "cannot know in advance how much bytecode will be generated, so we use the source " +
    +      "code length as the metric. A function's bytecode should not exceed 8KB, otherwise " +
    +      "it will not be JIT-compiled; it also should not be too small, otherwise there will " +
    +      "be many function calls.")
    +    .intConf
    --- End diff --
    
    1000 is conservative. But there is no general recommendation, since the 
bytecode size depends on the content; e.g. the constant `0` compiles to 1 byte 
of bytecode (`iconst_0`), while `9` compiles to 2 bytes (`bipush 9`).
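
    To make the threshold's role concrete, here is a simplified, hypothetical 
sketch (not Spark's actual splitting logic) of grouping generated source lines 
into helper-function bodies once the accumulated character count exceeds a 
threshold, which is the general idea behind 
`spark.sql.codegen.methodSplitThreshold`:

    ```python
    def split_code(lines, threshold=1000):
        """Group consecutive source lines into chunks whose total character
        count stays at or below `threshold`. A single oversized line still
        becomes its own chunk. Illustrative only; Spark's real splitter in
        CodegenContext is more involved."""
        chunks, current, size = [], [], 0
        for line in lines:
            if current and size + len(line) > threshold:
                chunks.append(current)
                current, size = [], 0
            current.append(line)
            size += len(line)
        if current:
            chunks.append(current)
        return chunks

    # A long generated function body, as a list of source lines.
    body = ["int v%d = compute(%d);" % (i, i) for i in range(100)]

    # Each resulting chunk would be emitted as its own small method,
    # trading a few extra calls for JIT-friendly method sizes.
    funcs = split_code(body, threshold=200)
    ```

    A smaller threshold yields more helper methods (more call overhead); a 
larger one risks bodies whose bytecode blows past the JIT limit, which is the 
trade-off discussed above.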


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
