jackylee-ch commented on code in PR #15736:
URL: https://github.com/apache/iceberg/pull/15736#discussion_r2993686556


##########
docs/docs/spark-configuration.md:
##########
@@ -112,6 +112,14 @@ Spark's built-in catalog supports existing v1 and v2 tables tracked in a Hive Me
 
 This configuration can use the same Hive Metastore for both Iceberg and non-Iceberg tables.
 
+`SparkSessionCatalog` is useful when you want `spark_catalog` to work with both Iceberg and non-Iceberg
+tables in the same metastore. It is not a full replacement for a dedicated Iceberg catalog, though.

Review Comment:
   done
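
  For context, the `SparkSessionCatalog` setup described in the hunk above is wired up through Spark configuration properties. A minimal sketch follows; the `hive` catalog type is an assumption (adjust to your metastore setup):

  ```shell
  # Sketch: register Iceberg's SparkSessionCatalog as Spark's built-in catalog.
  # The catalog type `hive` is an assumption; adapt to your environment.
  spark-sql \
    --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
    --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog \
    --conf spark.sql.catalog.spark_catalog.type=hive
  ```

  With this in place, `spark_catalog` resolves both Iceberg and non-Iceberg tables from the same Hive Metastore.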



##########
docs/docs/spark-queries.md:
##########
@@ -51,6 +51,14 @@ writing filters that match Iceberg partition transforms. These functions are ava
 [Iceberg catalog](spark-configuration.md#catalog-configuration); they are not registered in Spark's
 built-in catalog.
 
+!!! note
+    In Spark versions before 4.2.0, `SparkSessionCatalog` does not expose Iceberg's `system`
+    namespace (see SPARK-54760). Queries such as `SELECT spark_catalog.system.bucket(16, id)`

Review Comment:
  I have added the issue and GitHub link in the doc.
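
  For readers hitting the limitation described in the note above, the usual workaround is to call the function through a separately configured Iceberg catalog instead of `spark_catalog`. A sketch, assuming a hypothetical Iceberg catalog named `my_catalog` and a table `my_catalog.db.events`:

  ```sql
  -- `my_catalog` is a hypothetical name for a dedicated Iceberg catalog.
  -- Iceberg registers its function namespace there even when spark_catalog does not expose it.
  SELECT my_catalog.system.bucket(16, id) FROM my_catalog.db.events;
  ```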



##########
docs/docs/spark-configuration.md:
##########
@@ -112,6 +112,13 @@ Spark's built-in catalog supports existing v1 and v2 tables tracked in a Hive Me
 
 This configuration can use the same Hive Metastore for both Iceberg and non-Iceberg tables.
 
+`SparkSessionCatalog` is useful when you want `spark_catalog` to work with both Iceberg and non-Iceberg
+tables in the same metastore. It is not a full replacement for a dedicated Iceberg catalog, though.

Review Comment:
   Oh, got it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
