szehon-ho commented on code in PR #12115:
URL: https://github.com/apache/iceberg/pull/12115#discussion_r1957066734
##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,98 @@ CALL catalog_name.system.compute_table_stats(table =>
'my_table', snapshot_id =>
Collect statistics of the snapshot with id `snap1` of table `my_table` for
columns `col1` and `col2`
```sql
CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id
=> 'snap1', columns => array('col1', 'col2'));
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+The `rewrite-table-path` procedure prepares an Iceberg table for moving or
copying to another location.
+
+### `rewrite-table-path`
+
+Stages a copy of the Iceberg table's metadata files where every absolute path
source prefix is replaced by the specified target prefix.
+This can be the starting point to fully or incrementally move or copy an
Iceberg table to a new location.
+
+!!! info
+ This procedure only stages rewritten metadata files and prepares a list of
files to copy. The actual file copy is not part of this procedure.
+
+
+| Argument Name      | Required? | default                                        | Type   | Description                                                             |
+|--------------------|-----------|------------------------------------------------|--------|-------------------------------------------------------------------------|
+| `table`            | ✔️        |                                                | string | Name of the table                                                       |
+| `source_prefix`    | ✔️        |                                                | string | The existing prefix to be replaced                                      |
+| `target_prefix`    | ✔️        |                                                | string | The replacement prefix for `source_prefix`                              |
+| `start_version`    |           | first metadata.json in table's metadata log    | string | The name or path to the chronologically first metadata.json to rewrite  |
+| `end_version`      |           | latest metadata.json                           | string | The name or path to the chronologically last metadata.json to rewrite   |
+| `staging_location` |           | new directory under table's metadata directory | string | The output location for newly modified metadata files                   |
+
+
+#### Modes of operation:
+
+- Full Rewrite:
+
+By default, the procedure operates in full rewrite mode, rewriting all
reachable metadata files. This includes metadata.json, manifest lists,
manifests, and position delete files.
Review Comment:
Nit: how about adding a sentence about the copy plan:
``` By default, the procedure operates in full rewrite mode, rewriting all
reachable metadata files (this includes metadata.json, manifest lists,
manifests, and position delete files), and returning all reachable files in the
result copy plan.```
##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,98 @@ CALL catalog_name.system.compute_table_stats(table =>
'my_table', snapshot_id =>
Collect statistics of the snapshot with id `snap1` of table `my_table` for
columns `col1` and `col2`
```sql
CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id
=> 'snap1', columns => array('col1', 'col2'));
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+The `rewrite-table-path` procedure prepares an Iceberg table for moving or
copying to another location.
Review Comment:
Nit: recommend remove `moving` here as well. Also the title is
'replication' and not 'migration'
##########
docs/docs/spark-procedures.md:
##########
@@ -981,12 +981,10 @@ The `rewrite-table-path` procedure prepares an Iceberg
table for moving or copyi
### `rewrite-table-path`
Stages a copy of the Iceberg table's metadata files where every absolute path
source prefix is replaced by the specified target prefix.
-This can be the starting point to fully or incrementally copy an Iceberg table
under a
-source prefix to another under the target prefix.
+This can be the starting point to fully or incrementally move or copy an
Iceberg table to a new location.
Review Comment:
Nit: let's remove `move` to keep it brief ? (based on previous discussion)
##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,98 @@ CALL catalog_name.system.compute_table_stats(table =>
'my_table', snapshot_id =>
Collect statistics of the snapshot with id `snap1` of table `my_table` for
columns `col1` and `col2`
```sql
CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id
=> 'snap1', columns => array('col1', 'col2'));
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+The `rewrite-table-path` procedure prepares an Iceberg table for moving or
copying to another location.
+
+### `rewrite-table-path`
+
+Stages a copy of the Iceberg table's metadata files where every absolute path
source prefix is replaced by the specified target prefix.
+This can be the starting point to fully or incrementally move or copy an
Iceberg table to a new location.
+
+!!! info
+ This procedure only stages rewritten metadata files and prepares a list of
files to copy. The actual file copy is not part of this procedure.
+
+
+| Argument Name      | Required? | default                                        | Type   | Description                                                             |
+|--------------------|-----------|------------------------------------------------|--------|-------------------------------------------------------------------------|
+| `table`            | ✔️        |                                                | string | Name of the table                                                       |
+| `source_prefix`    | ✔️        |                                                | string | The existing prefix to be replaced                                      |
+| `target_prefix`    | ✔️        |                                                | string | The replacement prefix for `source_prefix`                              |
+| `start_version`    |           | first metadata.json in table's metadata log    | string | The name or path to the chronologically first metadata.json to rewrite  |
+| `end_version`      |           | latest metadata.json                           | string | The name or path to the chronologically last metadata.json to rewrite   |
+| `staging_location` |           | new directory under table's metadata directory | string | The output location for newly modified metadata files                   |
+
+
+#### Modes of operation:
+
+- Full Rewrite:
+
+By default, the procedure operates in full rewrite mode, rewriting all
reachable metadata files. This includes metadata.json, manifest lists,
manifests, and position delete files.
+
+- Incremental Rewrite:
+
+If `start_version` is provided, the procedure will only rewrite metadata files
created between `start_version` and `end_version`. `end_version` defaults to
the latest metadata.json of the table.
+
+#### Output
+
+| Output Name          | Type   | Description                                                       |
+|----------------------|--------|-------------------------------------------------------------------|
+| `latest_version`     | string | Name of the latest metadata file rewritten by this procedure      |
+| `file_list_location` | string | Path to a csv file containing a mapping of source to target paths |
+
+##### File List Copy Plan
+The file list contains the copy plan for all files added to the table between `start_version` and `end_version`.
Review Comment:
Nit: should we put a colon at the end of the sentence, to indicate that a
list is coming up?
Also does this render as a list?
##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,98 @@ CALL catalog_name.system.compute_table_stats(table =>
'my_table', snapshot_id =>
Collect statistics of the snapshot with id `snap1` of table `my_table` for
columns `col1` and `col2`
```sql
CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id
=> 'snap1', columns => array('col1', 'col2'));
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+The `rewrite-table-path` procedure prepares an Iceberg table for moving or
copying to another location.
+
+### `rewrite-table-path`
+
+Stages a copy of the Iceberg table's metadata files where every absolute path
source prefix is replaced by the specified target prefix.
+This can be the starting point to fully or incrementally move or copy an
Iceberg table to a new location.
+
+!!! info
+ This procedure only stages rewritten metadata files and prepares a list of
files to copy. The actual file copy is not part of this procedure.
+
+
+| Argument Name      | Required? | default                                        | Type   | Description                                                             |
+|--------------------|-----------|------------------------------------------------|--------|-------------------------------------------------------------------------|
+| `table`            | ✔️        |                                                | string | Name of the table                                                       |
+| `source_prefix`    | ✔️        |                                                | string | The existing prefix to be replaced                                      |
+| `target_prefix`    | ✔️        |                                                | string | The replacement prefix for `source_prefix`                              |
+| `start_version`    |           | first metadata.json in table's metadata log    | string | The name or path to the chronologically first metadata.json to rewrite  |
+| `end_version`      |           | latest metadata.json                           | string | The name or path to the chronologically last metadata.json to rewrite   |
+| `staging_location` |           | new directory under table's metadata directory | string | The output location for newly modified metadata files                   |
+
+
+#### Modes of operation:
+
+- Full Rewrite:
+
+By default, the procedure operates in full rewrite mode, rewriting all
reachable metadata files. This includes metadata.json, manifest lists,
manifests, and position delete files.
+
+- Incremental Rewrite:
+
+If `start_version` is provided, the procedure will only rewrite metadata files
created between `start_version` and `end_version`. `end_version` defaults to
the latest metadata.json of the table.
Review Comment:
Hm, I just noticed you only mention the 'end_version' default, and not the
'start_version' default.
How about:
```
The 'start_version' and/or 'end_version' arguments may be provided to limit
the scope to prepare an incremental copy. Only metadata files added between
'start_version' and 'end_version' will be rewritten, and only files added in
this range are returned in the copy plan.
```
##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,98 @@ CALL catalog_name.system.compute_table_stats(table =>
'my_table', snapshot_id =>
Collect statistics of the snapshot with id `snap1` of table `my_table` for
columns `col1` and `col2`
```sql
CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id
=> 'snap1', columns => array('col1', 'col2'));
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+The `rewrite-table-path` procedure prepares an Iceberg table for moving or
copying to another location.
+
+### `rewrite-table-path`
+
+Stages a copy of the Iceberg table's metadata files where every absolute path
source prefix is replaced by the specified target prefix.
+This can be the starting point to fully or incrementally move or copy an
Iceberg table to a new location.
+
+!!! info
+ This procedure only stages rewritten metadata files and prepares a list of
files to copy. The actual file copy is not part of this procedure.
+
+
+| Argument Name      | Required? | default                                        | Type   | Description                                                             |
+|--------------------|-----------|------------------------------------------------|--------|-------------------------------------------------------------------------|
+| `table`            | ✔️        |                                                | string | Name of the table                                                       |
+| `source_prefix`    | ✔️        |                                                | string | The existing prefix to be replaced                                      |
+| `target_prefix`    | ✔️        |                                                | string | The replacement prefix for `source_prefix`                              |
+| `start_version`    |           | first metadata.json in table's metadata log    | string | The name or path to the chronologically first metadata.json to rewrite  |
+| `end_version`      |           | latest metadata.json                           | string | The name or path to the chronologically last metadata.json to rewrite   |
+| `staging_location` |           | new directory under table's metadata directory | string | The output location for newly modified metadata files                   |
+
+
+#### Modes of operation:
+
+- Full Rewrite:
+
+By default, the procedure operates in full rewrite mode, rewriting all
reachable metadata files. This includes metadata.json, manifest lists,
manifests, and position delete files.
+
+- Incremental Rewrite:
+
+If `start_version` is provided, the procedure will only rewrite metadata files
created between `start_version` and `end_version`. `end_version` defaults to
the latest metadata.json of the table.
+
+#### Output
+
+| Output Name          | Type   | Description                                                       |
+|----------------------|--------|-------------------------------------------------------------------|
+| `latest_version`     | string | Name of the latest metadata file rewritten by this procedure      |
+| `file_list_location` | string | Path to a csv file containing a mapping of source to target paths |
+
+##### File List Copy Plan
+The file list contains the copy plan for all files added to the table between `start_version` and `end_version`.
+For each file, it specifies
+
+- Source Path:
+The original file path in the table, or the staging location if the file has
been rewritten.
+
+- Target Path:
+The path with the replacement prefix.
+
+The following example shows a copy plan for three files:
+
+```csv
+sourcepath/datafile1.parquet,targetpath/datafile1.parquet
+sourcepath/datafile2.parquet,targetpath/datafile2.parquet
+stagingpath/manifest.avro,targetpath/manifest.avro
+```
+
+#### Examples
+
+Full rewrite of the paths of table `my_table` from a source location in HDFS to a target location in an S3 bucket.
+This produces a new set of metadata using the s3a prefix in the default staging location under the table's metadata directory.
+
+```sql
+CALL catalog_name.system.rewrite_table_path(
+ table => 'db.my_table',
+ source_prefix => "hdfs://nn:8020/path/to/source_table",
+ target_prefix => "s3a://bucket/prefix/db.db/my_table"
+);
+```
+
+Incremental rewrite of a table's path from a source location to a target
location between metadata versions
+`v2.metadata.json` and `v20.metadata.json`, with files written to an explicit
staging location.
+
+```sql
+CALL catalog_name.system.rewrite_table_path(
+ table => 'db.my_table',
+ source_prefix => "s3a://bucketOne/prefix/db.db/my_table",
+ target_prefix => "s3a://bucketTwo/prefix/db.db/my_table",
+ start_version => "v2.metadata.json",
+ end_version => "v20.metadata.json",
+ staging_location => "s3a://bucketStaging/my_table"
+);
+```
+
+Once the rewrite is completed, third-party tools (e.g. [Distcp](https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html)) can be used to copy the newly created
+metadata files and data files to the target location.
+
+Lastly, [register_table](#register_table) procedure can be used to register
the copied table in the target location with a catalog.
Review Comment:
Nit: put `the` before `register_table procedure`
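
Since the copy plan documented above is just a two-column CSV of source and target paths, the copy step can also be scripted directly when Distcp is not available. A minimal sketch (the `apply_copy_plan` helper is hypothetical and assumes local filesystem paths; a real deployment would use a distributed copy tool):

```python
import csv
import os
import shutil

def apply_copy_plan(file_list_location):
    """Copy every file listed in a rewrite_table_path copy-plan CSV
    from its source path to its target path."""
    with open(file_list_location, newline="") as f:
        for source_path, target_path in csv.reader(f):
            # Ensure the target directory exists before copying.
            os.makedirs(os.path.dirname(target_path), exist_ok=True)
            shutil.copy(source_path, target_path)
```

The helper would be pointed at the `file_list_location` returned by the procedure.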
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]