spark git commit: [SPARK-22221][SQL][FOLLOWUP] Externalize spark.sql.execution.arrow.maxRecordsPerBatch

2018-01-29 Thread lixiao
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 75131ee86 -> 2858eaafa


[SPARK-22221][SQL][FOLLOWUP] Externalize 
spark.sql.execution.arrow.maxRecordsPerBatch

## What changes were proposed in this pull request?

This is a followup to #19575, which added a documentation section on setting 
the maximum Arrow record batch size; this change externalizes the conf that was 
referenced in those docs.

## How was this patch tested?
NA

Author: Bryan Cutler 

Closes #20423 from 
BryanCutler/arrow-user-doc-externalize-maxRecordsPerBatch-SPARK-22221.

(cherry picked from commit f235df66a4754cbb64d5b7b5cfd5a52bdd243b8a)
Signed-off-by: gatorsmile 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2858eaaf
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2858eaaf
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/2858eaaf

Branch: refs/heads/branch-2.3
Commit: 2858eaafaf06d3b8c55a8a5ed7831260244932cd
Parents: 75131ee
Author: Bryan Cutler 
Authored: Mon Jan 29 17:37:55 2018 -0800
Committer: gatorsmile 
Committed: Mon Jan 29 17:38:14 2018 -0800

--
 .../src/main/scala/org/apache/spark/sql/internal/SQLConf.scala  | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/2858eaaf/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index 61ea03d..54a3559 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -1051,7 +1051,6 @@ object SQLConf {
 
   val ARROW_EXECUTION_MAX_RECORDS_PER_BATCH =
     buildConf("spark.sql.execution.arrow.maxRecordsPerBatch")
-      .internal()
       .doc("When using Apache Arrow, limit the maximum number of records that can be written " +
         "to a single ArrowRecordBatch in memory. If set to zero or negative there is no limit.")
       .intConf
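
For illustration only (not part of the commit): a minimal Scala sketch of how a
user could set the now-externalized conf. The app name, master URL, and the
batch-size values 5000 and 10000 below are arbitrary assumptions.

import org.apache.spark.sql.SparkSession

// Hypothetical example: tune the Arrow batch-size conf at session creation.
val spark = SparkSession.builder()
  .appName("arrow-batch-size-example")
  .master("local[*]")
  // Cap each in-memory ArrowRecordBatch at 5000 records;
  // zero or a negative value means no limit.
  .config("spark.sql.execution.arrow.maxRecordsPerBatch", "5000")
  .getOrCreate()

// It is a runtime SQL conf, so it can also be changed on a live session:
spark.conf.set("spark.sql.execution.arrow.maxRecordsPerBatch", "10000")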





spark git commit: [SPARK-22221][SQL][FOLLOWUP] Externalize spark.sql.execution.arrow.maxRecordsPerBatch

2018-01-29 Thread lixiao
Repository: spark
Updated Branches:
  refs/heads/master b834446ec -> f235df66a


[SPARK-22221][SQL][FOLLOWUP] Externalize 
spark.sql.execution.arrow.maxRecordsPerBatch

## What changes were proposed in this pull request?

This is a followup to #19575, which added a documentation section on setting 
the maximum Arrow record batch size; this change externalizes the conf that was 
referenced in those docs.

## How was this patch tested?
NA

Author: Bryan Cutler 

Closes #20423 from 
BryanCutler/arrow-user-doc-externalize-maxRecordsPerBatch-SPARK-22221.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f235df66
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f235df66
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f235df66

Branch: refs/heads/master
Commit: f235df66a4754cbb64d5b7b5cfd5a52bdd243b8a
Parents: b834446
Author: Bryan Cutler 
Authored: Mon Jan 29 17:37:55 2018 -0800
Committer: gatorsmile 
Committed: Mon Jan 29 17:37:55 2018 -0800

--
 .../src/main/scala/org/apache/spark/sql/internal/SQLConf.scala  | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/f235df66/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index 61ea03d..54a3559 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -1051,7 +1051,6 @@ object SQLConf {
 
   val ARROW_EXECUTION_MAX_RECORDS_PER_BATCH =
     buildConf("spark.sql.execution.arrow.maxRecordsPerBatch")
-      .internal()
       .doc("When using Apache Arrow, limit the maximum number of records that can be written " +
         "to a single ArrowRecordBatch in memory. If set to zero or negative there is no limit.")
       .intConf

