Github user rednaxelafx commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22847#discussion_r229943260
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -812,6 +812,17 @@ object SQLConf {
         .intConf
         .createWithDefault(65535)
     
    +  val CODEGEN_METHOD_SPLIT_THRESHOLD = buildConf("spark.sql.codegen.methodSplitThreshold")
    +    .internal()
    +    .doc("The threshold of the comment-free source-code length of a single Java function " +
    +      "generated by codegen, beyond which the function is split. When the generated Java " +
    +      "function's source code exceeds this threshold, it is split into multiple small " +
    +      "functions. We cannot know how much bytecode will be generated, so we use the source " +
    +      "length as the metric. A function's bytecode should not exceed 8KB, otherwise it will " +
    +      "not be JIT-compiled; it should also not be too small, otherwise there will be many " +
    +      "function calls.")
    +    .intConf
    --- End diff --
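
    As an aside on what the proposed config means in practice: the split decision is driven by the comment-free source length of a generated function, not by its bytecode size. Below is a minimal standalone sketch of that decision; the names `lengthWithoutComments` and `shouldSplit` are hypothetical helpers for illustration, not the actual CodegenContext splitting logic.

    ```scala
    object MethodSplitSketch {
      // Assumed default for illustration; the real limit would come from
      // spark.sql.codegen.methodSplitThreshold.
      val methodSplitThreshold: Int = 1024

      // Rough approximation of "source length without comments": strip block
      // and line comments, then count the remaining characters.
      def lengthWithoutComments(code: String): Int =
        code
          .replaceAll("(?s)/\\*.*?\\*/", "") // block comments
          .replaceAll("//.*", "")            // line comments
          .length

      // A generated function body becomes a candidate for splitting once its
      // comment-free length exceeds the configured threshold.
      def shouldSplit(generatedBody: String): Boolean =
        lengthWithoutComments(generatedBody) > methodSplitThreshold
    }
    ```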
    
    Oh I see, you're using the column name...that's not the right place to put 
the "prefix". Column names are almost never carried over to the generated code 
in the current framework (the only exception is the lambda variable name).
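
    A quick way to see this for yourself is to dump the whole-stage generated code and search it for a column name. A small sketch follows; `debugCodegen` is the existing helper in org.apache.spark.sql.execution.debug, while the column name and session setup here are just made-up examples.

    ```scala
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.execution.debug._

    object InspectCodegen {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .master("local[1]")
          .appName("codegen-inspect")
          .getOrCreate()
        import spark.implicits._

        // The column gets a deliberately distinctive name; searching the dumped
        // generated Java source for it typically finds nothing, because codegen
        // refers to values by ordinal-based names (value_0, isNull_0, ...).
        val df = Seq(1, 2, 3).toDF("myVeryDistinctiveColumn")
          .selectExpr("myVeryDistinctiveColumn + 1 AS plusOne")

        // Prints the generated Java source for each WholeStageCodegen subtree.
        df.debugCodegen()
        spark.stop()
      }
    }
    ```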


---
