This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 05c48e7450a9 [MINOR][DOCS] Fix rst link in Python API docs for .sql()
05c48e7450a9 is described below

commit 05c48e7450a9d7b0d41035627d2036699ccb8f21
Author: Nicholas Chammas <nicholas.cham...@gmail.com>
AuthorDate: Tue Dec 26 14:53:14 2023 +0900

    [MINOR][DOCS] Fix rst link in Python API docs for .sql()
    
    ### What changes were proposed in this pull request?
    
    This PR fixes the rst markup for a link in the documentation for `pyspark.sql.SparkSession.sql` and `pyspark.pandas.sql`.
    
    ### Why are the changes needed?
    
    The current markup is incorrect.
    
    Technically, though the markup in this PR is correct, the link target is still not ideal: we should link to the page relative to the site root rather than hardcode a link to `/latest/`. However, I could not figure out how to do that in rst, and building the API docs takes a really long time, and I could not make it build incrementally.
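
    For reference, the change swaps the inline HTML link in the docstrings:

    ```
    <a href="https://spark.apache.org/docs/latest/sql-ref-datatypes.html">
    Supported Data Types</a> for supported value types in Python.
    ```

    for the standard rst pattern of a named hyperlink plus an explicit target:

    ```
    `Supported Data Types`_ for supported value types in Python.

    .. _Supported Data Types: https://spark.apache.org/docs/latest/sql-ref-datatypes.html
    ```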
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes, the markup goes from looking like this:
    
    <img width="500" 
src="https://github.com/apache/spark/assets/1039369/077566a3-79df-4aa2-a0f7-d819f608f673";>
    
    To looking like this:
    
    <img width="500" 
src="https://github.com/apache/spark/assets/1039369/b1453761-3f9c-435e-89e1-cfd3748cce9c";>
    
    ### How was this patch tested?
    
    I built the docs as follows:
    
    ```
    SKIP_SCALADOC=1 SKIP_RDOC=1 SKIP_SQLDOC=1 bundle exec jekyll serve
    ```
    
    And reviewed the output in my browser.
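
    For context: the `SKIP_*` flags disable the Scala, R, and SQL doc builds (see `docs/README.md` for the full set of build flags), so the Python API docs are still rebuilt as part of the Jekyll build.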
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No.
    
    Closes #44488 from nchammas/data-types-rst-link.
    
    Authored-by: Nicholas Chammas <nicholas.cham...@gmail.com>
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
---
 python/pyspark/pandas/sql_formatter.py | 6 +++---
 python/pyspark/sql/session.py          | 7 ++++---
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/python/pyspark/pandas/sql_formatter.py b/python/pyspark/pandas/sql_formatter.py
index 9800037016c5..7e8263f552f0 100644
--- a/python/pyspark/pandas/sql_formatter.py
+++ b/python/pyspark/pandas/sql_formatter.py
@@ -109,13 +109,13 @@ def sql(
     args : dict or list
         A dictionary of parameter names to Python objects or a list of Python objects
         that can be converted to SQL literal expressions. See
-        <a href="https://spark.apache.org/docs/latest/sql-ref-datatypes.html">
-        Supported Data Types</a> for supported value types in Python.
+        `Supported Data Types`_ for supported value types in Python.
         For example, dictionary keys: "rank", "name", "birthdate";
         dictionary values: 1, "Steven", datetime.date(2023, 4, 2).
         A value can be also a `Column` of a literal or collection constructor functions such
         as `map()`, `array()`, `struct()`, in that case it is taken as is.
 
+        .. _Supported Data Types: https://spark.apache.org/docs/latest/sql-ref-datatypes.html
 
         .. versionadded:: 3.4.0
 
@@ -176,7 +176,7 @@ def sql(
     1  2
     2  3
 
-    And substitude named parameters with the `:` prefix by SQL literals.
+    And substitute named parameters with the `:` prefix by SQL literals.
 
     >>> ps.sql("SELECT * FROM range(10) WHERE id > :bound1", args={"bound1":7})
        id
diff --git a/python/pyspark/sql/session.py b/python/pyspark/sql/session.py
index 7615491a1778..10b56d006dcd 100644
--- a/python/pyspark/sql/session.py
+++ b/python/pyspark/sql/session.py
@@ -1548,13 +1548,14 @@ class SparkSession(SparkConversionMixin):
         args : dict or list
             A dictionary of parameter names to Python objects or a list of Python objects
             that can be converted to SQL literal expressions. See
-            <a href="https://spark.apache.org/docs/latest/sql-ref-datatypes.html">
-            Supported Data Types</a> for supported value types in Python.
+            `Supported Data Types`_ for supported value types in Python.
             For example, dictionary keys: "rank", "name", "birthdate";
             dictionary or list values: 1, "Steven", datetime.date(2023, 4, 2).
             A value can be also a `Column` of a literal or collection constructor functions such
             as `map()`, `array()`, `struct()`, in that case it is taken as is.
 
+            .. _Supported Data Types: https://spark.apache.org/docs/latest/sql-ref-datatypes.html
+
             .. versionadded:: 3.4.0
 
         kwargs : dict
@@ -1631,7 +1632,7 @@ class SparkSession(SparkConversionMixin):
         |  3|  6|
         +---+---+
 
-        And substitude named parameters with the `:` prefix by SQL literals.
+        And substitute named parameters with the `:` prefix by SQL literals.
 
         >>> from pyspark.sql.functions import create_map
         >>> spark.sql(
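
For context (not part of this patch): a minimal sketch of the parameterized `.sql()` call that the patched docstrings describe, reusing the `range` example from the pandas doctest above. Per the `versionadded` note in the diff, the `args` binding is available since Spark 3.4.

```
# Not part of the patch: a sketch of the parameterized .sql() API that the
# patched docstrings document (the `args` parameter, Spark 3.4+).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Named parameters use the ":" prefix in the query text and are bound
# through the `args` dictionary of parameter names to Python objects.
spark.sql("SELECT * FROM range(10) WHERE id > :bound1", args={"bound1": 7}).show()
```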


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
