chenhao-db commented on code in PR #47310:
URL: https://github.com/apache/spark/pull/47310#discussion_r1678874248


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##########
@@ -4367,6 +4367,14 @@ object SQLConf {
       .booleanConf
       .createWithDefault(true)
 
+  val JSON_USE_UNSAFE_ROW =
+    buildConf("spark.sql.json.useUnsafeRow")
+      .internal()
+      .doc("When set to true, use UnsafeRow to represent struct result in the 
JSON parser.")
+      .version("4.0.0")
+      .booleanConf
+      .createWithDefault(false)

Review Comment:
   I feel it is better not to enable it by default. The benchmark has shown that applying the change can slow down the JSON scan a bit. If there is enough memory, then saving memory doesn't bring enough benefit.
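
   For context, a minimal sketch of how a user could opt in explicitly if the flag stays off by default. This assumes only the config name `spark.sql.json.useUnsafeRow` from the diff above; the `spark` session and the input path are placeholders:

       // Opt in per session; the flag is internal and defaults to false.
       spark.conf.set("spark.sql.json.useUnsafeRow", "true")
       // JSON scans in this session would then represent struct results as UnsafeRow.
       val df = spark.read.json("/path/to/input.json")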



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

