This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/paimon.git


The following commit(s) were added to refs/heads/master by this push:
     new 0f1e1ebc48 [doc] Add missing `options` for related procedures (#5508)
0f1e1ebc48 is described below

commit 0f1e1ebc48c4e960d85e3a5748520a3f0abf9a65
Author: Yubin Li <[email protected]>
AuthorDate: Tue Apr 22 16:36:45 2025 +0800

    [doc] Add missing `options` for related procedures (#5508)
---
 docs/content/flink/procedures.md | 17 +++++++++++------
 docs/content/spark/procedures.md |  9 +++++++--
 2 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/docs/content/flink/procedures.md b/docs/content/flink/procedures.md
index ffacc3d717..1fa2399c79 100644
--- a/docs/content/flink/procedures.md
+++ b/docs/content/flink/procedures.md
@@ -85,7 +85,7 @@ All available procedures are listed below.
             <li>partitions(optional): partition filter.</li>
             <li>order_strategy(optional): 'order' or 'zorder' or 'hilbert' or 
'none'.</li>
             <li>order_by(optional): the columns to sort by. Leave empty if 'order_strategy' is 'none'.</li>
-            <li>options(optional): additional dynamic options of the 
table.</li>
+            <li>options(optional): additional dynamic options of the table. These take precedence over the table's original properties (`tableProp`) but are overridden by explicit procedure arguments (`procedureArg`).</li>
             <li>where(optional): partition predicate (cannot be used together with "partitions"). Note: since where is a keyword, it must be wrapped in a pair of backticks, e.g. `where`.</li>
             <li>partition_idle_time(optional): performs a full compaction only on partitions that have not received any new data for 'partition_idle_time'; only these partitions are compacted. This argument cannot be used with order compact.</li>
             <li>compact_strategy(optional): determines how files are picked for merging; the default depends on the runtime execution mode. The 'full' strategy is only supported in batch mode and selects all files for merging. The 'minor' strategy picks the set of files to merge based on specified conditions.</li>
@@ -568,7 +568,8 @@ All available procedures are listed below.
             retain_max => 'retain_max', <br/>
             retain_min => 'retain_min', <br/>
             older_than => 'older_than', <br/>
-            max_deletes => 'max_deletes') <br/><br/>
+            max_deletes => 'max_deletes', <br/>
+            options => 'key1=value1,key2=value2') <br/><br/>
          -- Use indexed argument<br/>
          -- for Flink 1.18<br/>
          CALL [catalog.]sys.expire_snapshots(table, retain_max)<br/><br/>
@@ -582,6 +583,7 @@ All available procedures are listed below.
             <li>retain_min: the minimum number of completed snapshots to 
retain.</li>
             <li>older_than: timestamp before which snapshots will be removed.</li>
             <li>max_deletes: the maximum number of snapshots that can be 
deleted at once.</li>
+            <li>options: additional dynamic options of the table. These take precedence over the table's original properties (`tableProp`) but are overridden by explicit procedure arguments (`procedureArg`).</li>
       </td>
       <td>
          -- for Flink 1.18<br/>
@@ -590,7 +592,7 @@ All available procedures are listed below.
          CALL sys.expire_snapshots(`table` => 'default.T', retain_max => 
2)<br/>
          CALL sys.expire_snapshots(`table` => 'default.T', older_than => 
'2024-01-01 12:00:00')<br/>
          CALL sys.expire_snapshots(`table` => 'default.T', older_than => 
'2024-01-01 12:00:00', retain_min => 10)<br/>
-         CALL sys.expire_snapshots(`table` => 'default.T', older_than => 
'2024-01-01 12:00:00', max_deletes => 10)<br/>
+         CALL sys.expire_snapshots(`table` => 'default.T', older_than => 
'2024-01-01 12:00:00', max_deletes => 10, options => 
'snapshot.expire.limit=1')<br/>
       </td>
    </tr>
    <tr>
@@ -635,7 +637,7 @@ All available procedures are listed below.
 <tr>
       <td>expire_partitions</td>
       <td>
-         CALL [catalog.]sys.expire_partitions(table, expiration_time, 
timestamp_formatter, expire_strategy)<br/><br/>
+         CALL [catalog.]sys.expire_partitions(table, expiration_time, 
timestamp_formatter, expire_strategy, options)<br/><br/>
       </td>
       <td>
         To expire partitions. Arguments:
@@ -645,13 +647,14 @@ All available procedures are listed below.
             <li>timestamp_pattern: the pattern to get a timestamp from 
partitions.</li>
             <li>expire_strategy: specifies the partition expiration strategy; possible values are 'values-time' (default) and 'update-time'.</li>
             <li>max_expires: the maximum number of partitions to expire at once; optional.</li>
+            <li>options: additional dynamic options of the table. These take precedence over the table's original properties (`tableProp`) but are overridden by explicit procedure arguments (`procedureArg`).</li>
       </td>
       <td>
          -- for Flink 1.18<br/>
          CALL sys.expire_partitions('default.T', '1 d', 'yyyy-MM-dd', '$dt', 
'values-time')<br/><br/>
          -- for Flink 1.19 and later<br/>
          CALL sys.expire_partitions(`table` => 'default.T', expiration_time => 
'1 d', timestamp_formatter => 'yyyy-MM-dd', expire_strategy => 
'values-time')<br/>
-         CALL sys.expire_partitions(`table` => 'default.T', expiration_time => 
'1 d', timestamp_formatter => 'yyyy-MM-dd HH:mm', timestamp_pattern => '$dt 
$hm', expire_strategy => 'values-time')<br/><br/>
+         CALL sys.expire_partitions(`table` => 'default.T', expiration_time => 
'1 d', timestamp_formatter => 'yyyy-MM-dd HH:mm', timestamp_pattern => '$dt 
$hm', expire_strategy => 'values-time', options => 
'partition.expiration-max-num=2')<br/><br/>
       </td>
    </tr>
     <tr>
@@ -769,11 +772,13 @@ All available procedures are listed below.
    <tr>
       <td>compact_manifest</td>
       <td>
-         CALL [catalog.]sys.compact_manifest(`table` => 'identifier')
+         CALL [catalog.]sys.compact_manifest(`table` => 'identifier')<br/>
+         CALL [catalog.]sys.compact_manifest(`table` => 'identifier', options => 'key1=value1,key2=value2')
       </td>
       <td>
         To compact the manifests. Arguments:
             <li>table: the target table identifier. Cannot be empty.</li>
+            <li>options: additional dynamic options of the table. These take precedence over the table's original properties (`tableProp`) but are overridden by explicit procedure arguments (`procedureArg`).</li>
       </td>
       <td>
          CALL sys.compact_manifest(`table` => 'default.T')
diff --git a/docs/content/spark/procedures.md b/docs/content/spark/procedures.md
index be667e67f2..715c0d7037 100644
--- a/docs/content/spark/procedures.md
+++ b/docs/content/spark/procedures.md
@@ -46,6 +46,7 @@ This section introduce all available spark procedures about 
paimon.
             <li>where: partition predicate. Left empty for all partitions. 
(Can't be used together with "partitions")</li>          
             <li>order_strategy: 'order' or 'zorder' or 'hilbert' or 'none'. 
Left empty for 'none'.</li>
             <li>order_columns: the columns need to be sort. Left empty if 
'order_strategy' is 'none'.</li>
+            <li>options: additional dynamic options of the table. These take precedence over the table's original properties (`tableProp`) but are overridden by explicit procedure arguments (`procedureArg`).</li>
             <li>partition_idle_time: performs a full compaction only on partitions that have not received any new data for 'partition_idle_time'; only these partitions are compacted. This argument cannot be used with order compact.</li>
             <li>compact_strategy: determines how files are picked for merging; the default depends on the runtime execution mode. The 'full' strategy is only supported in batch mode and selects all files for merging. The 'minor' strategy picks the set of files to merge based on specified conditions.</li>
       </td>
@@ -53,6 +54,7 @@ This section introduce all available spark procedures about 
paimon.
          SET spark.sql.shuffle.partitions=10; --set the compact parallelism 
<br/><br/>
          CALL sys.compact(table => 'T', partitions => 'p=0;p=1',  
order_strategy => 'zorder', order_by => 'a,b') <br/><br/>
          CALL sys.compact(table => 'T', where => 'p>0 and p<3', order_strategy 
=> 'zorder', order_by => 'a,b') <br/><br/>
+         CALL sys.compact(table => 'T', where => 'dt>10 and h<20', 
order_strategy => 'zorder', order_by => 'a,b', options => 
'sink.parallelism=4')<br/><br/> 
          CALL sys.compact(table => 'T', partition_idle_time => '60s')<br/><br/>
          CALL sys.compact(table => 'T', compact_strategy => 'minor')<br/><br/>
       </td>
@@ -66,8 +68,9 @@ This section introduce all available spark procedures about 
paimon.
             <li>retain_min: the minimum number of completed snapshots to 
retain.</li>
             <li>older_than: timestamp before which snapshots will be 
removed.</li>
             <li>max_deletes: the maximum number of snapshots that can be 
deleted at once.</li>
+            <li>options: additional dynamic options of the table. These take precedence over the table's original properties (`tableProp`) but are overridden by explicit procedure arguments (`procedureArg`).</li>
       </td>
-      <td>CALL sys.expire_snapshots(table => 'default.T', retain_max => 
10)</td>
+      <td>CALL sys.expire_snapshots(table => 'default.T', retain_max => 10, 
options => 'snapshot.expire.limit=1')</td>
     </tr>
     <tr>
       <td>expire_partitions</td>
@@ -79,9 +82,10 @@ This section introduce all available spark procedures about 
paimon.
             <li>timestamp_pattern: the pattern to get a timestamp from 
partitions.</li>
             <li>expire_strategy: specifies the partition expiration strategy; possible values are 'values-time' (default) and 'update-time'.</li>
             <li>max_expires: the maximum number of partitions to expire at once; optional.</li>
+            <li>options: additional dynamic options of the table. These take precedence over the table's original properties (`tableProp`) but are overridden by explicit procedure arguments (`procedureArg`).</li>
       </td>
       <td>CALL sys.expire_partitions(table => 'default.T', expiration_time => 
'1 d', timestamp_formatter => 
-'yyyy-MM-dd', timestamp_pattern => '$dt', expire_strategy => 
'values-time')</td>
+'yyyy-MM-dd', timestamp_pattern => '$dt', expire_strategy => 'values-time', 
options => 'partition.expiration-max-num=2')</td>
     </tr>
     <tr>
       <td>create_tag</td>
@@ -376,6 +380,7 @@ This section introduce all available spark procedures about 
paimon.
       <td>
         To compact the manifests. Arguments:
             <li>table: the target table identifier. Cannot be empty.</li>
+            <li>options: additional dynamic options of the table. These take precedence over the table's original properties (`tableProp`) but are overridden by explicit procedure arguments (`procedureArg`).</li>
       </td>
       <td>
          CALL sys.compact_manifest(`table` => 'default.T')
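
The precedence rule repeated throughout the new `options` descriptions (table properties are overridden by dynamic options, which are in turn overridden by explicit procedure arguments) amounts to a layered merge. The sketch below illustrates that merge order only; the function names and option parsing are hypothetical, not Paimon APIs.

```python
# Illustrative sketch of the option-precedence rule described in the diff:
# tableProp < options < procedureArg. Not Paimon source code.

def parse_options(options_str):
    """Parse the 'key1=value1,key2=value2' string accepted by the procedures."""
    if not options_str:
        return {}
    return dict(pair.split("=", 1) for pair in options_str.split(","))

def effective_options(table_props, dynamic_options, procedure_args):
    """Merge with increasing priority: tableProp < options < procedureArg."""
    merged = dict(table_props)      # lowest priority: original table properties
    merged.update(dynamic_options)  # per-call `options` override them
    merged.update(procedure_args)   # explicit procedure arguments win
    return merged

# Example: a call-site option overrides the table property of the same key.
props = {"snapshot.expire.limit": "10", "sink.parallelism": "2"}
opts = parse_options("snapshot.expire.limit=1")
print(effective_options(props, opts, {}))
```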
