This is an automated email from the ASF dual-hosted git repository.

blue pushed a commit to branch 0.13.1
in repository https://gitbox.apache.org/repos/asf/iceberg-docs.git


The following commit(s) were added to refs/heads/0.13.1 by this push:
     new ddce65c  Update Spark section on inspecting tables. (#66)
ddce65c is described below

commit ddce65c483deae73d2c10e01bf3c249983025a1c
Author: Wing Yew Poon <[email protected]>
AuthorDate: Tue Jun 7 08:19:47 2022 -0700

    Update Spark section on inspecting tables. (#66)
    
    Using the SparkSessionCatalog to query metadata tables is possible in Spark 3.2.
---
 docs/content/docs/spark/spark-queries.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/content/docs/spark/spark-queries.md b/docs/content/docs/spark/spark-queries.md
index e2960bf..ab80362 100644
--- a/docs/content/docs/spark/spark-queries.md
+++ b/docs/content/docs/spark/spark-queries.md
@@ -168,7 +168,9 @@ To inspect a table's history, snapshots, and other metadata, Iceberg supports me
 Metadata tables are identified by adding the metadata table name after the original table name. For example, history for `db.table` is read using `db.table.history`.
 
 {{< hint info >}}
-As of Spark 3.0, the format of the table name for inspection (`catalog.database.table.metadata`) doesn't work with Spark's default catalog (`spark_catalog`). If you've replaced the default catalog, you may want to use `DataFrameReader` API to inspect the table.
+For Spark 2.4, use the `DataFrameReader` API to [inspect tables](#inspecting-with-dataframes).
+
+For Spark 3, prior to 3.2, the Spark [session catalog](../spark-configuration#replacing-the-session-catalog) does not support table names with multipart identifiers such as `catalog.database.table.metadata`. As a workaround, configure an `org.apache.iceberg.spark.SparkCatalog`, or use the Spark `DataFrameReader` API.
 {{< /hint >}}
 
 ### History
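The hint updated by this patch points at two ways to inspect metadata tables. A minimal sketch of both, assuming a `spark-shell` session with an Iceberg catalog configured under the hypothetical name `local` and an existing table `db.table`:

```scala
// SQL: read the history metadata table by appending its name to the
// table identifier (works with SparkSessionCatalog on Spark 3.2+,
// or with a configured org.apache.iceberg.spark.SparkCatalog).
spark.sql("SELECT * FROM local.db.table.history").show()

// DataFrameReader: catalog-independent alternative, also usable on
// Spark 2.4 and on Spark 3.x before 3.2.
spark.read.format("iceberg").load("db.table.history").show()
```

The same suffix pattern applies to the other metadata tables (e.g. `snapshots`, `files`); this sketch uses `history` because the patched text does.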
