This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-paimon.git


The following commit(s) were added to refs/heads/master by this push:
     new 4ba8378c2 [doc] Add PaimonSparkSessionExtensions to spark
4ba8378c2 is described below

commit 4ba8378c28a2798bab36cc10f6f5d6b3c9557dc8
Author: Jingsong <[email protected]>
AuthorDate: Tue Dec 19 17:03:46 2023 +0800

    [doc] Add PaimonSparkSessionExtensions to spark
---
 docs/content/engines/spark.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/docs/content/engines/spark.md b/docs/content/engines/spark.md
index 3b7e6e8c7..765644cb7 100644
--- a/docs/content/engines/spark.md
+++ b/docs/content/engines/spark.md
@@ -99,7 +99,8 @@ When starting `spark-sql`, use the following command to register Paimon’s Spar
 ```bash
 spark-sql ... \
     --conf spark.sql.catalog.paimon=org.apache.paimon.spark.SparkCatalog \
-    --conf spark.sql.catalog.paimon.warehouse=file:/tmp/paimon
+    --conf spark.sql.catalog.paimon.warehouse=file:/tmp/paimon \
+    --conf spark.sql.extensions=org.apache.paimon.spark.extensions.PaimonSparkSessionExtensions
 ```
 
 Catalogs are configured using properties under spark.sql.catalog.(catalog_name). In above case, 'paimon' is the
@@ -127,7 +128,8 @@ Hive conf from Spark session, you just need to configure Spark's Hive conf.
 
 ```bash
 spark-sql ... \
-    --conf spark.sql.catalog.spark_catalog=org.apache.paimon.spark.SparkGenericCatalog
+    --conf spark.sql.catalog.spark_catalog=org.apache.paimon.spark.SparkGenericCatalog \
+    --conf spark.sql.extensions=org.apache.paimon.spark.extensions.PaimonSparkSessionExtensions
 ```
 
 Using `SparkGenericCatalog`, you can use Paimon tables in this Catalog or non-Paimon tables such as Spark's csv,
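For context, registering these session extensions is what makes Paimon's extra SQL syntax available in `spark-sql`. A minimal sketch of the full command after this change, assuming a local warehouse path; the `CALL` statement, table name, and version are hypothetical illustrations and not part of this commit:

```shell
# Sketch: start spark-sql with the Paimon catalog plus the session extensions
# added in this commit. The rollback call below is a made-up example of the
# kind of statement the extensions enable; adjust names to your environment.
spark-sql \
    --conf spark.sql.catalog.paimon=org.apache.paimon.spark.SparkCatalog \
    --conf spark.sql.catalog.paimon.warehouse=file:/tmp/paimon \
    --conf spark.sql.extensions=org.apache.paimon.spark.extensions.PaimonSparkSessionExtensions \
    -e "CALL paimon.sys.rollback(table => 'default.t', version => 1)"
```

Without the `spark.sql.extensions` line, extension-provided syntax such as `CALL` would be rejected by the stock Spark SQL parser.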
