This is an automated email from the ASF dual-hosted git repository.

szehon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/iceberg.git


The following commit(s) were added to refs/heads/master by this push:
     new cdedd29  Docs: Update section on inspecting tables (#4255)
cdedd29 is described below

commit cdedd29dd034b52c33840e0c0afeb5542e202383
Author: Wing Yew Poon <[email protected]>
AuthorDate: Wed Mar 16 16:55:43 2022 -0700

    Docs: Update section on inspecting tables (#4255)
---
 docs/spark/spark-queries.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/spark/spark-queries.md b/docs/spark/spark-queries.md
index 8f139c5..a8b885b 100644
--- a/docs/spark/spark-queries.md
+++ b/docs/spark/spark-queries.md
@@ -168,7 +168,9 @@ To inspect a table's history, snapshots, and other metadata, Iceberg supports me
 Metadata tables are identified by adding the metadata table name after the original table name. For example, history for `db.table` is read using `db.table.history`.
 
 {{< hint info >}}
-As of Spark 3.0, the format of the table name for inspection (`catalog.database.table.metadata`) doesn't work with Spark's default catalog (`spark_catalog`). If you've replaced the default catalog, you may want to use `DataFrameReader` API to inspect the table.
+For Spark 2.4, use the `DataFrameReader` API to [inspect tables](#inspecting-with-dataframes).
+
+For Spark 3, prior to 3.2, the Spark [session catalog](../spark-configuration#replacing-the-session-catalog) does not support table names with multipart identifiers such as `catalog.database.table.metadata`. As a workaround, configure an `org.apache.iceberg.spark.SparkCatalog`, or use the Spark `DataFrameReader` API.
 {{< /hint >}}
 
 ### History
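
As a hedged illustration of the first workaround named in the updated hint (the catalog name `my_catalog` and the `type = hive` setting are assumptions for the sketch, not part of this commit), an `org.apache.iceberg.spark.SparkCatalog` could be configured in `spark-defaults.conf` like this:

```
spark.sql.catalog.my_catalog       = org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.my_catalog.type  = hive
```

With a catalog configured this way, a multipart metadata-table identifier such as `SELECT * FROM my_catalog.db.table.history` should resolve; the alternative workaround is the `DataFrameReader` path, e.g. `spark.read.format("iceberg").load("db.table.history")`.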