eric-maynard commented on code in PR #1488:
URL: https://github.com/apache/polaris/pull/1488#discussion_r2067155584


##########
plugins/spark/README.md:
##########
@@ -30,11 +30,66 @@ and depends on iceberg-spark-runtime 1.8.1.
 
 # Build Plugin Jar
 A task createPolarisSparkJar is added to build a jar for the Polaris Spark plugin, the jar is named as:
-"polaris-iceberg-<iceberg_version>-spark-runtime-<spark_major_version>_<scala_version>.jar"
+The resulting jar is located at plugins/spark/v3.5/build/<scala_version>/libs after the build.
 
-Building the Polaris project produces client jars for both Scala 2.12 and 2.13, and CI runs the Spark
-client tests for both Scala versions as well.
+# Start Spark with a Local Polaris Service Using the Built Jar
+Once the jar is built, we can manually test it with Spark and a local Polaris service.
 
-The Jar can also be built alone with a specific version using target `:polaris-spark-3.5_<scala_version>`. For example:
-- `./gradlew :polaris-spark-3.5_2.12:createPolarisSparkJar` - Build a jar for the Polaris Spark plugin with scala version 2.12.
-The result jar is located at plugins/spark/build/<scala_version>/libs after the build.
+The following command starts a Polaris server for local testing; it runs on localhost:8181 with the default
+realm `POLARIS` and root credentials `root:secret`:
+```shell
+./gradlew run
+```
+
+Once the local server is running, the following command can be used to start the spark-shell with the built Spark
+client jar, and to use the local Polaris server as a catalog.
+
+```shell
+bin/spark-shell \
+--jars <path-to-spark-client-jar> \
+--packages org.apache.hadoop:hadoop-aws:3.4.0,io.delta:delta-spark_2.12:3.3.1 \
+--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,io.delta.sql.DeltaSparkSessionExtension \
+--conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog \
+--conf spark.sql.catalog.<catalog-name>.warehouse=<catalog-name> \
+--conf spark.sql.catalog.<catalog-name>.header.X-Iceberg-Access-Delegation=true \
+--conf spark.sql.catalog.<catalog-name>=org.apache.polaris.spark.SparkCatalog \
+--conf spark.sql.catalog.<catalog-name>.uri=http://localhost:8181/api/catalog \
+--conf spark.sql.catalog.<catalog-name>.credential="root:secret" \
+--conf spark.sql.catalog.<catalog-name>.scope='PRINCIPAL_ROLE:ALL' \
+--conf spark.sql.catalog.<catalog-name>.token-refresh-enabled=true \
+--conf spark.sql.catalog.<catalog-name>.type=rest \
+--conf spark.sql.sources.useV1SourceList=''
+```
+
+Assume the path to the built Spark client jar is
+`/polaris/plugins/spark/v3.5/spark/build/2.12/libs/polaris-iceberg-1.8.1-spark-runtime-3.5_2.12-0.10.0-beta-incubating-SNAPSHOT.jar`
+and the name of the catalog is `polaris`. The CLI command will then look like the following:
+
+```shell
+bin/spark-shell \
+--jars /polaris/plugins/spark/v3.5/spark/build/2.12/libs/polaris-iceberg-1.8.1-spark-runtime-3.5_2.12-0.10.0-beta-incubating-SNAPSHOT.jar \
+--packages org.apache.hadoop:hadoop-aws:3.4.0,io.delta:delta-spark_2.12:3.3.1 \
+--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,io.delta.sql.DeltaSparkSessionExtension \
+--conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog \
+--conf spark.sql.catalog.polaris.warehouse=polaris \
+--conf spark.sql.catalog.polaris.header.X-Iceberg-Access-Delegation=true \
+--conf spark.sql.catalog.polaris=org.apache.polaris.spark.SparkCatalog \
+--conf spark.sql.catalog.polaris.uri=http://localhost:8181/api/catalog \
+--conf spark.sql.catalog.polaris.credential="root:secret" \
+--conf spark.sql.catalog.polaris.scope='PRINCIPAL_ROLE:ALL' \
+--conf spark.sql.catalog.polaris.token-refresh-enabled=true \
+--conf spark.sql.catalog.polaris.type=rest \
+--conf spark.sql.sources.useV1SourceList=''
+```
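+
+Once connected, tables can be created and queried through the catalog. The snippet below is an illustrative
+sketch to be entered in the spark-shell REPL; it assumes a catalog named `polaris` exists on the server, and the
+namespace and table names are placeholders:
+
+```scala
+// Create a namespace and an Iceberg table in the `polaris` catalog,
+// then write and read back a couple of rows. Names are examples only.
+spark.sql("CREATE NAMESPACE IF NOT EXISTS polaris.quickstart_db")
+spark.sql("CREATE TABLE polaris.quickstart_db.iceberg_tbl (id BIGINT, data STRING) USING iceberg")
+spark.sql("INSERT INTO polaris.quickstart_db.iceberg_tbl VALUES (1, 'a'), (2, 'b')")
+spark.sql("SELECT * FROM polaris.quickstart_db.iceberg_tbl").show()
+```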
+
+# Limitations
+The Polaris Spark client supports catalog management for both Iceberg and non-Iceberg tables: it routes all Iceberg
+table requests to the Iceberg REST endpoints, and routes non-Iceberg table requests to the Generic Table REST endpoints.
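+
+As an illustration (a sketch only; the table name and location are placeholders), a Delta table created through the
+same catalog is served by the Generic Table endpoints, and the explicit `LOCATION` is required, as noted in the
+limitations below:
+
+```scala
+// A Delta table created through the same `polaris` catalog; the request is
+// routed to the Generic Table REST endpoints. Replace the location with a
+// real path before running.
+spark.sql("""
+  CREATE TABLE polaris.quickstart_db.delta_tbl (id BIGINT, data STRING)
+  USING delta LOCATION 'file:///tmp/polaris/delta_tbl'
+""")
+```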
+
+The following describes the current limitations of the Polaris Spark client:
+1) Create table as select (CTAS) is not supported for Delta tables. As a result, the `saveAsTable` method of
+   `DataFrame` is also not supported, since it relies on the CTAS support; a workaround is sketched after this list.
+2) Creating a non-Iceberg table without an explicit location is not supported.
+3) Renaming a non-Iceberg table is not supported.
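+
+As a workaround sketch for limitation 1 (names and the location are placeholders), create the Delta table first and
+then insert into it, instead of relying on CTAS or `saveAsTable`:
+
+```scala
+// Unsupported: CTAS for Delta tables, and therefore DataFrame.saveAsTable.
+// spark.sql("CREATE TABLE polaris.quickstart_db.t USING delta LOCATION '...' AS SELECT 1 AS id")
+
+// Works: create the Delta table with an explicit location, then insert rows.
+spark.sql("CREATE TABLE polaris.quickstart_db.delta_tbl2 (id BIGINT) USING delta LOCATION 'file:///tmp/polaris/delta_tbl2'")
+spark.sql("INSERT INTO polaris.quickstart_db.delta_tbl2 VALUES (1), (2)")
+```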

Review Comment:
   From the perspective of the Spark user there's no such thing as a generic table, I guess. They either create Iceberg or Delta tables, which are stored in Polaris; everything else is transparent.


