[GitHub] spark pull request #19083: [SPARK-21871][SQL] Check actual bytecode size whe...

2017-10-01 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/spark/pull/19083#discussion_r142059351
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala ---
@@ -432,25 +432,31 @@ class DataFrameAggregateSuite extends QueryTest with SharedSQLContext {
   }
 
   test("zero moments") {
-    val input = Seq((1, 2)).toDF("a", "b")
-    checkAnswer(
-      input.agg(stddev('a), stddev_samp('a), stddev_pop('a), variance('a),
-        var_samp('a), var_pop('a), skewness('a), kurtosis('a)),
-      Row(Double.NaN, Double.NaN, 0.0, Double.NaN, Double.NaN, 0.0,
-        Double.NaN, Double.NaN))
+    // In SPARK-21871, we added code to check the actual bytecode size of gen'd methods. If the
+    // size goes over `hugeMethodLimit`, Spark fails to compile the methods and the execution
+    // also fails in test mode. So, we explicitly made this threshold higher here.
+    // This workaround can be removed once that issue is fixed.
+    withSQLConf(SQLConf.CODEGEN_HUGE_METHOD_LIMIT.key -> "16000") {
--- End diff --

Do you think it is ok to silently fall back to the non-codegen path in tests? 
I thought that the too-long-function cases should also throw explicit 
exceptions in tests, via a check like `!Utils.isTesting && 
sqlContext.conf.codegenFallback`.
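
[Editorial note: for reference, the control flow under discussion can be
sketched as below. This is a minimal, self-contained model of the decision
logic only; `Conf`, `compiledCodeSize`, and the return strings are
hypothetical stand-ins, not Spark APIs — the real check lives in Spark's
whole-stage codegen path.]

    // Hedged sketch of the fallback decision discussed above. Everything
    // here is a stand-in except the shape of the logic: compare the compiled
    // method size against hugeMethodLimit, fall back silently in production,
    // and (as suggested in the review) fail loudly when running under tests.
    object CodegenFallbackSketch {
      final case class Conf(hugeMethodLimit: Int, codegenFallback: Boolean)

      def execute(compiledCodeSize: Int, conf: Conf, isTesting: Boolean): String =
        if (compiledCodeSize <= conf.hugeMethodLimit) {
          "run generated code" // normal whole-stage codegen path
        } else if (!isTesting && conf.codegenFallback) {
          "fall back to interpreted execution" // silent production fallback
        } else {
          // In tests, surface the too-long-function case explicitly
          // instead of hiding it behind the fallback.
          throw new IllegalStateException(
            s"Generated method size $compiledCodeSize exceeds " +
              s"hugeMethodLimit=${conf.hugeMethodLimit}")
        }
    }

[The "zero moments" test quoted above sidesteps the limit by raising
`SQLConf.CODEGEN_HUGE_METHOD_LIMIT` to 16000; the sketch's throwing branch
is the behavior this comment asks for when the limit is exceeded under
`Utils.isTesting`.]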


[GitHub] spark pull request #19083: [SPARK-21871][SQL] Check actual bytecode size whe...

2017-10-01 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/spark/pull/19083#discussion_r142059729
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala ---
(quotes the same "zero moments" hunk as the previous message)
--- End diff --

Sorry, my bad. I'll do so. Thanks!

