This is an automated email from the ASF dual-hosted git repository.

cws pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/iceberg.git


The following commit(s) were added to refs/heads/master by this push:
     new 1ca781b  Docs: Update for mkdocs 1.2 (#2747)
1ca781b is described below

commit 1ca781bb1836f2629913b1b630bc0a7095104cfe
Author: Ryan Blue <[email protected]>
AuthorDate: Thu Jul 1 13:36:51 2021 -0700

    Docs: Update for mkdocs 1.2 (#2747)
    
    * Docs: Fix mkdocs use_directory_urls in 1.2.
    
    * Fix broken links and update redirects.
---
 site/docs/flink.md               |  4 ++--
 site/docs/java-api-quickstart.md |  6 +++---
 site/docs/maintenance.md         | 10 +++++-----
 site/docs/spark-procedures.md    |  2 +-
 site/mkdocs.yml                  | 11 ++++++-----
 5 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/site/docs/flink.md b/site/docs/flink.md
index 34ee502..9077fda 100644
--- a/site/docs/flink.md
+++ b/site/docs/flink.md
@@ -441,7 +441,7 @@ stream.print();
 env.execute("Test Iceberg Batch Read");
 ```
 
-There are other options that we could set by Java API, please see the [FlinkSource#Builder](./javadoc/master/org/apache/iceberg/flink/source/FlinkSource.html).
+There are other options that we could set by Java API, please see the [FlinkSource#Builder](./javadoc/0.11.1/org/apache/iceberg/flink/source/FlinkSource.html).
 
 ## Writing with DataStream
 
@@ -505,7 +505,7 @@ RewriteDataFilesActionResult result = Actions.forTable(table)
         .execute();
 ```
 
-For more doc about options of the rewrite files action, please see [RewriteDataFilesAction](./javadoc/master/org/apache/iceberg/flink/actions/RewriteDataFilesAction.html)
+For more doc about options of the rewrite files action, please see [RewriteDataFilesAction](./javadoc/0.11.1/org/apache/iceberg/flink/actions/RewriteDataFilesAction.html)
 
 ## Future improvement.
 
diff --git a/site/docs/java-api-quickstart.md b/site/docs/java-api-quickstart.md
index 660b8dc..de8bd31 100644
--- a/site/docs/java-api-quickstart.md
+++ b/site/docs/java-api-quickstart.md
@@ -108,9 +108,9 @@ Spark uses both `HiveCatalog` and `HadoopTables` to load tables. Hive is used wh
 
 To read and write to tables from Spark see:
 
-* [Reading a table in Spark](./spark.md#reading-an-iceberg-table)
-* [Appending to a table in Spark](./spark.md#appending-data)
-* [Overwriting data in a table in Spark](./spark.md#overwriting-data)
+* [SQL queries in Spark](spark-queries.md#querying-with-sql)
+* [`INSERT INTO` in Spark](spark-writes.md#insert-into)
+* [`MERGE INTO` in Spark](spark-writes.md#merge-into)
 
 
 ## Schemas
diff --git a/site/docs/maintenance.md b/site/docs/maintenance.md
index 203c103..3624fe7 100644
--- a/site/docs/maintenance.md
+++ b/site/docs/maintenance.md
@@ -26,7 +26,7 @@
 
 Each write to an Iceberg table creates a new _snapshot_, or version, of a table. Snapshots can be used for time-travel queries, or the table can be rolled back to any valid snapshot.
 
-Snapshots accumulate until they are expired by the [`expireSnapshots`](./javadoc/master/org/apache/iceberg/Table.html#expireSnapshots--) operation. Regularly expiring snapshots is recommended to delete data files that are no longer needed, and to keep the size of table metadata small.
+Snapshots accumulate until they are expired by the [`expireSnapshots`](./javadoc/0.11.1/org/apache/iceberg/Table.html#expireSnapshots--) operation. Regularly expiring snapshots is recommended to delete data files that are no longer needed, and to keep the size of table metadata small.
 
 This example expires snapshots that are older than 1 day:
 
@@ -38,7 +38,7 @@ table.expireSnapshots()
      .commit();
 ```
 
-See the [`ExpireSnapshots` Javadoc](./javadoc/master/org/apache/iceberg/ExpireSnapshots.html) to see more configuration options.
+See the [`ExpireSnapshots` Javadoc](./javadoc/0.11.1/org/apache/iceberg/ExpireSnapshots.html) to see more configuration options.
 
 There is also a Spark action that can run table expiration in parallel for large tables:
 
@@ -83,7 +83,7 @@ Actions.forTable(table)
     .execute();
 ```
 
-See the [RemoveOrphanFilesAction Javadoc](./javadoc/master/org/apache/iceberg/RemoveOrphanFilesAction.html) to see more configuration options.
+See the [RemoveOrphanFilesAction Javadoc](./javadoc/0.11.1/org/apache/iceberg/actions/RemoveOrphanFilesAction.html) to see more configuration options.
 
 This action may take a long time to finish if you have lots of files in data and metadata directories. It is recommended to execute this periodically, but you may not need to execute this often.
 
@@ -119,7 +119,7 @@ Actions.forTable(table).rewriteDataFiles()
 
 The `files` metadata table is useful for inspecting data file sizes and determining when to compact partitons.
 
-See the [`RewriteDataFilesAction` Javadoc](./javadoc/master/org/apache/iceberg/RewriteDataFilesAction.html) to see more configuration options.
+See the [`RewriteDataFilesAction` Javadoc](./javadoc/0.11.1/org/apache/iceberg/actions/RewriteDataFilesAction.html) to see more configuration options.
 
 ### Rewrite manifests
 
@@ -139,4 +139,4 @@ table.rewriteManifests()
     .commit();
 ```
 
-See the [`RewriteManifestsAction` Javadoc](./javadoc/master/org/apache/iceberg/RewriteManifestsAction.html) to see more configuration options.
+See the [`RewriteManifestsAction` Javadoc](./javadoc/0.11.1/org/apache/iceberg/actions/RewriteManifestsAction.html) to see more configuration options.
diff --git a/site/docs/spark-procedures.md b/site/docs/spark-procedures.md
index 39247f5..6e190dc 100644
--- a/site/docs/spark-procedures.md
+++ b/site/docs/spark-procedures.md
@@ -246,7 +246,7 @@ Rewrite manifests for a table to optimize scan planning.
 
 Data files in manifests are sorted by fields in the partition spec. This procedure runs in parallel using a Spark job.
 
-See the [`RewriteManifestsAction` Javadoc](./javadoc/master/org/apache/iceberg/actions/RewriteManifestsAction.html)
+See the [`RewriteManifestsAction` Javadoc](./javadoc/0.11.1/org/apache/iceberg/actions/RewriteManifestsAction.html)
 to see more configuration options.
 
 **Note** this procedure invalidates all cached Spark plans that reference the affected table.
diff --git a/site/mkdocs.yml b/site/mkdocs.yml
index 73f9167..dbda3f3 100644
--- a/site/mkdocs.yml
+++ b/site/mkdocs.yml
@@ -18,6 +18,7 @@
 #
 
 site_name: Apache Iceberg
+site_url: https://iceberg.apache.org/
 site_description: A table format for large, slow-moving tabular data
 
 remote_name: apache
@@ -32,7 +33,10 @@ extra:
   versions:
     iceberg: 0.11.1
 plugins:
-  - redirects
+  - redirects:
+      redirect_maps:
+        'time-travel.md': 'spark-queries/#time-travel'
+        'presto.md': 'trino.md'
   - markdownextradata  
 markdown_extensions:
   - toc:
@@ -67,7 +71,7 @@ nav:
     - Writes: spark-writes.md
     - Maintenance Procedures: spark-procedures.md
     - Structured Streaming: spark-structured-streaming.md
-    - Time Travel: spark#time-travel
+    - Time Travel: spark-queries/#time-travel
   - Trino: https://trino.io/docs/current/connector/iceberg.html
   - Flink: flink.md
   - Hive: hive.md
@@ -92,6 +96,3 @@ nav:
     - Sponsors: https://www.apache.org/foundation/thanks.html
     - Donate: https://www.apache.org/foundation/sponsorship.html
     - Events: https://www.apache.org/events/current-event.html
-redirects:
-  time-travel/index: snapshots/index
-  presto/index: trino/index
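For context on the mkdocs.yml change above: the mkdocs-redirects plugin reads its old-path-to-new-path mappings from a `redirect_maps` key nested under the plugin's own entry in the `plugins` list, which is why the commit moves the mappings out of the former top-level `redirects:` block. A minimal standalone sketch of the resulting configuration (paths taken from the diff; the surrounding keys are illustrative, not the full site config):

```yaml
# Excerpt of a mkdocs.yml using the mkdocs-redirects plugin.
# Keys on the left are old source pages; values are their new targets.
site_name: Apache Iceberg
plugins:
  - redirects:
      redirect_maps:
        'time-travel.md': 'spark-queries/#time-travel'
        'presto.md': 'trino.md'
```

With this shape, the plugin emits a stub page at each old URL that redirects browsers to the new location, so external links to the old pages keep working after the restructuring.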
