Github user rednaxelafx commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20434#discussion_r164687283
  
    --- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -660,12 +660,10 @@ object SQLConf {
       val WHOLESTAGE_HUGE_METHOD_LIMIT = 
buildConf("spark.sql.codegen.hugeMethodLimit")
         .internal()
         .doc("The maximum bytecode size of a single compiled Java function 
generated by whole-stage " +
    -      "codegen. When the compiled function exceeds this threshold, " +
    -      "the whole-stage codegen is deactivated for this subtree of the 
current query plan. " +
    -      s"The default value is 
${CodeGenerator.DEFAULT_JVM_HUGE_METHOD_LIMIT} and " +
    -      "this is a limit in the OpenJDK JVM implementation.")
    --- End diff --
    
    The 8000 byte limit is a HotSpot-specific thing, but the 64KB limit is 
imposed by the Java Class File format, as a part of the JVM spec.
    
    We may want to wordsmith a bit here to explain that:
    1. 65535 is the largest bytecode size possible for a valid Java method, 
so setting the default value to 65535 effectively turns the limit off for 
whole-stage codegen;
    2. For those who wish to turn this limit on when running on HotSpot, it 
may be preferable to set the value to 
`CodeGenerator.DEFAULT_JVM_HUGE_METHOD_LIMIT` to match HotSpot's implementation.
    
    I don't have a good concrete suggestion for how to express these two 
points concisely in the doc string, though.
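    
    As a rough sketch of point 2 (hypothetical usage only, not a doc-string 
suggestion), a user on HotSpot could opt back into the 8000-byte threshold 
explicitly:
    
    ```scala
    // Hypothetical sketch: re-enable HotSpot's huge-method threshold for
    // whole-stage codegen instead of the 65535 class-file maximum.
    // Assumes a live SparkSession named `spark`.
    import org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator
    
    spark.conf.set("spark.sql.codegen.hugeMethodLimit",
      CodeGenerator.DEFAULT_JVM_HUGE_METHOD_LIMIT) // 8000 on HotSpot
    ```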


---
