This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
     new a1471f9  [SPARK-28977][DOCS][SQL] Fix DataFrameReader.jdbc docs to document that partition column can be numeric, date or timestamp type
a1471f9 is described below

commit a1471f95a4b2ff2ff5c70403458d796429ee857a
Author: Sean Owen <sean.o...@databricks.com>
AuthorDate: Thu Sep 5 18:32:45 2019 +0900

    [SPARK-28977][DOCS][SQL] Fix DataFrameReader.jdbc docs to document that partition column can be numeric, date or timestamp type
    
    ### What changes were proposed in this pull request?
    
    `DataFrameReader.jdbc()` accepts a partition column that is of numeric,
    date, or timestamp type, according to the implementation in
    `JDBCRelation.scala`. Update the scaladoc accordingly, so that it matches
    the documentation in `sql-data-sources-jdbc.md` as well.
    
    ### Why are the changes needed?
    
    scaladoc is incorrect.
    
    ### Does this PR introduce any user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    N/A
    
    Closes #25687 from srowen/SPARK-28977.
    
    Authored-by: Sean Owen <sean.o...@databricks.com>
    Signed-off-by: HyukjinKwon <gurwls...@apache.org>
---
 R/pkg/R/SQLContext.R                                               | 3 ++-
 python/pyspark/sql/readwriter.py                                   | 3 ++-
 sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/R/pkg/R/SQLContext.R b/R/pkg/R/SQLContext.R
index c819a7d..c281aa0 100644
--- a/R/pkg/R/SQLContext.R
+++ b/R/pkg/R/SQLContext.R
@@ -655,7 +655,8 @@ loadDF <- function(x = NULL, ...) {
 #'
 #' @param url JDBC database url of the form \code{jdbc:subprotocol:subname}
 #' @param tableName the name of the table in the external database
-#' @param partitionColumn the name of a column of integral type that will be used for partitioning
+#' @param partitionColumn the name of a column of numeric, date, or timestamp type
+#'                        that will be used for partitioning.
 #' @param lowerBound the minimum value of \code{partitionColumn} used to decide partition stride
 #' @param upperBound the maximum value of \code{partitionColumn} used to decide partition stride
 #' @param numPartitions the number of partitions, This, along with \code{lowerBound} (inclusive),
diff --git a/python/pyspark/sql/readwriter.py b/python/pyspark/sql/readwriter.py
index ea7cc80..4396699 100644
--- a/python/pyspark/sql/readwriter.py
+++ b/python/pyspark/sql/readwriter.py
@@ -526,7 +526,8 @@ class DataFrameReader(OptionUtils):
 
         :param url: a JDBC URL of the form ``jdbc:subprotocol:subname``
         :param table: the name of the table
-        :param column: the name of an integer column that will be used for partitioning;
+        :param column: the name of a column of numeric, date, or timestamp type
+                       that will be used for partitioning;
                        if this parameter is specified, then ``numPartitions``, ``lowerBound``
                        (inclusive), and ``upperBound`` (exclusive) will form partition strides
                        for generated WHERE clause expressions used to split the column
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
index 85cd3f0..c71f871 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
@@ -248,7 +248,8 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging {
    *
    * @param url JDBC database url of the form `jdbc:subprotocol:subname`.
    * @param table Name of the table in the external database.
-   * @param columnName the name of a column of integral type that will be used for partitioning.
+   * @param columnName the name of a column of numeric, date, or timestamp type
+   *                   that will be used for partitioning.
    * @param lowerBound the minimum value of `columnName` used to decide partition stride.
    * @param upperBound the maximum value of `columnName` used to decide partition stride.
    * @param numPartitions the number of partitions. This, along with `lowerBound` (inclusive),
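
As a rough illustration of how these bounds are used: `lowerBound` (inclusive)
and `upperBound` (exclusive) are split into `numPartitions` strides, each of
which becomes a WHERE clause on the partition column. This is a simplified
sketch of the numeric case only (the actual logic in
`JDBCRelation.columnPartition` also handles date/timestamp bounds and stride
rounding):

```scala
// Simplified sketch: split [lower, upper) into numPartitions WHERE clauses.
// Not the actual Spark implementation, which also covers date/timestamp
// columns and uneven strides.
def strides(column: String, lower: Long, upper: Long, numPartitions: Int): Seq[String] = {
  val stride = (upper - lower) / numPartitions
  (0 until numPartitions).map { i =>
    val lo = lower + i * stride
    val hi = lo + stride
    if (i == 0) s"$column < $hi or $column is null"
    else if (i == numPartitions - 1) s"$column >= $lo"
    else s"$column >= $lo AND $column < $hi"
  }
}

// strides("id", 0L, 100L, 4) =>
// Seq("id < 25 or id is null", "id >= 25 AND id < 50",
//     "id >= 50 AND id < 75", "id >= 75")
```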


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
