This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new dc7324e  [SPARK-31365][SQL][FOLLOWUP] Refine config document for nested predicate pushdown
dc7324e is described below

commit dc7324e5e39783995b90e64d4737127c10a210cf
Author: Liang-Chi Hsieh <vii...@gmail.com>
AuthorDate: Thu May 7 09:57:08 2020 +0900

    [SPARK-31365][SQL][FOLLOWUP] Refine config document for nested predicate pushdown
    
    ### What changes were proposed in this pull request?
    
    This is a followup to address https://github.com/apache/spark/pull/28366#discussion_r420611872 by refining the SQL config document.
    
    ### Why are the changes needed?
    
    Make the config document less confusing for developers.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No
    
    ### How was this patch tested?
    
    Only doc change.
    
    Closes #28468 from viirya/SPARK-31365-followup.
    
    Authored-by: Liang-Chi Hsieh <vii...@gmail.com>
    Signed-off-by: Takeshi Yamamuro <yamam...@apache.org>
    (cherry picked from commit 9bf738724a3895551464d8ba0d455bc90868983f)
    Signed-off-by: Takeshi Yamamuro <yamam...@apache.org>
---
 .../src/main/scala/org/apache/spark/sql/internal/SQLConf.scala         | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index 8d673c5..6c18280 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -2070,7 +2070,8 @@ object SQLConf {
       .internal()
       .doc("A comma-separated list of data source short names or fully qualified data source " +
         "implementation class names for which Spark tries to push down predicates for nested " +
-        "columns and/or names containing `dots` to data sources. Currently, Parquet implements " +
+        "columns and/or names containing `dots` to data sources. This configuration is only " +
+        "effective with file-based data source in DSv1. Currently, Parquet implements " +
         "both optimizations while ORC only supports predicates for names containing `dots`. The " +
         "other data sources don't support this feature yet. So the default value is 'parquet,orc'.")
       .version("3.0.0")
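For context, the refined doc string above describes the config as a comma-separated allow-list of data source short names or fully qualified implementation class names, with default value 'parquet,orc'. As a rough illustration only (a hypothetical helper, not Spark's actual matching code; the config key itself is not shown in this hunk), such a lookup could be sketched as:

```python
def supports_nested_pushdown(source: str, conf_value: str = "parquet,orc") -> bool:
    """Check whether `source` (a short name such as 'parquet', or a fully
    qualified implementation class name) appears in the comma-separated
    allow-list. Matching here is case-insensitive; this is an assumption
    for illustration, not a statement about Spark's behavior."""
    allowed = {name.strip().lower() for name in conf_value.split(",") if name.strip()}
    return source.lower() in allowed

# With the default value, Parquet and ORC qualify; other sources do not.
print(supports_nested_pushdown("parquet"))  # True
print(supports_nested_pushdown("json"))     # False
```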


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
