This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/amoro.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 203928469 Regenerated Amoro latest docs page based on 
ab8a1963f2276a73b515a86d0184dc37363876c3 commit
203928469 is described below

commit 203928469748f994735e5a68665c66c5742f1bd0
Author: GitHub Actions <[email protected]>
AuthorDate: Tue Mar 31 11:31:36 2026 +0000

    Regenerated Amoro latest docs page based on 
ab8a1963f2276a73b515a86d0184dc37363876c3 commit
---
 output/docs/latest/configurations/index.html        |  2 +-
 output/docs/latest/deployment/index.html            |  4 ++--
 output/docs/latest/flink-cdc-ingestion/index.html   |  2 +-
 output/docs/latest/flink-datastream/index.html      |  2 +-
 output/docs/latest/flink-dml/index.html             |  6 +++---
 output/docs/latest/flink-getting-started/index.html | 16 ++++++++--------
 output/docs/latest/flink-using-logstore/index.html  |  8 --------
 output/docs/latest/index.html                       |  2 +-
 output/docs/latest/search.json                      |  2 +-
 9 files changed, 18 insertions(+), 26 deletions(-)

diff --git a/output/docs/latest/configurations/index.html 
b/output/docs/latest/configurations/index.html
index f370fa316..5cc5fb217 100644
--- a/output/docs/latest/configurations/index.html
+++ b/output/docs/latest/configurations/index.html
@@ -1065,7 +1065,7 @@ soon.</p>
       <tr>
           <td>properties.XXX</td>
           <td>NULL</td>
-          <td>Other configurations of LogStore. <br><br>For Kafka, all the 
configurations supported by Kafka Consumer/Producer can be set by prefixing 
them with <code>properties.</code>,<br>such as 
<code>'properties.batch.size'='16384'</code>,<br>refer to <a 
href="https://kafka.apache.org/documentation/#consumerconfigs">Kafka Consumer 
Configurations</a>, <a 
href="https://kafka.apache.org/documentation/#producerconfigs">Kafka Producer 
Configurations</a> for more details.<br><br> For Pulsa [...]
+          <td>Other configurations of LogStore. <br><br>For Kafka, all the 
configurations supported by Kafka Consumer/Producer can be set by prefixing 
them with <code>properties.</code>,<br>such as 
<code>'properties.batch.size'='16384'</code>,<br>refer to <a 
href="https://kafka.apache.org/documentation/#consumerconfigs">Kafka Consumer 
Configurations</a>, <a 
href="https://kafka.apache.org/documentation/#producerconfigs">Kafka Producer 
Configurations</a> for more details.<br><br> For Pulsa [...]
       </tr>
   </tbody>
 </table>
diff --git a/output/docs/latest/deployment/index.html 
b/output/docs/latest/deployment/index.html
index 3dea4d32f..efe66d21c 100644
--- a/output/docs/latest/deployment/index.html
+++ b/output/docs/latest/deployment/index.html
@@ -700,9 +700,9 @@ Unzip it to create the amoro-x.y.z directory in the same 
directory, and then go
 </span></span><span style="display:flex;"><span>$ ls
 </span></span><span style="display:flex;"><span>amoro-x.y.z-bin.zip <span 
style="color:#75715e"># AMS release package</span>
 </span></span><span style="display:flex;"><span>
-</span></span><span style="display:flex;"><span>$ cd <span 
style="color:#e6db74">${</span>base_dir<span 
style="color:#e6db74">}</span>/amoro-format-mixed/amoro-mixed-flink/v1.17/amoro-mixed-flink-runtime-1.17/target
+</span></span><span style="display:flex;"><span>$ cd <span 
style="color:#e6db74">${</span>base_dir<span 
style="color:#e6db74">}</span>/amoro-format-mixed/amoro-mixed-flink/v1.18/amoro-mixed-flink-runtime-1.18/target
 </span></span><span style="display:flex;"><span>$ ls
-</span></span><span 
style="display:flex;"><span>amoro-format-mixed-flink-runtime-1.17-x.y.z.jar 
<span style="color:#75715e"># Flink 1.17 runtime package</span>
+</span></span><span 
style="display:flex;"><span>amoro-format-mixed-flink-runtime-1.18-x.y.z.jar 
<span style="color:#75715e"># Flink 1.18 runtime package</span>
 </span></span><span style="display:flex;"><span>
 </span></span><span style="display:flex;"><span>$ cd <span 
style="color:#e6db74">${</span>base_dir<span 
style="color:#e6db74">}</span>/amoro-format-mixed/amoro-mixed-spark/v3.3/amoro-mixed-spark-runtime-3.3/target
 </span></span><span style="display:flex;"><span>$ ls
diff --git a/output/docs/latest/flink-cdc-ingestion/index.html 
b/output/docs/latest/flink-cdc-ingestion/index.html
index 2943c559e..f44074808 100644
--- a/output/docs/latest/flink-cdc-ingestion/index.html
+++ b/output/docs/latest/flink-cdc-ingestion/index.html
@@ -974,7 +974,7 @@ Maven project&rsquo;s pom.xml file.</p>
 </span></span></code></pre></div><h3 id="mixed-iceberg-format-1">Mixed-Iceberg 
format</h3>
 <p>The following example will show how to write CDC data from multiple MySQL 
tables into the corresponding Mixed-Iceberg table.</p>
 <p><strong>Requirements</strong></p>
-<p>Please add <a 
href="https://mvnrepository.com/artifact/org.apache.flink/flink-connector-mysql-cdc/3.1.1">Flink
 Connector MySQL CDC</a> and <a 
href="https://mvnrepository.com/artifact/org.apache.amoro/amoro-format-mixed-flink-1.17/0.7.0-incubating">Amoro</a>
 dependencies to your Maven project&rsquo;s pom.xml file.</p>
+<p>Please add <a 
href="https://mvnrepository.com/artifact/org.apache.flink/flink-connector-mysql-cdc/3.1.1">Flink
 Connector MySQL CDC</a> and <a 
href="https://mvnrepository.com/artifact/org.apache.amoro/amoro-format-mixed-flink-1.18/0.9.0-incubating">Amoro</a>
 dependencies to your Maven project&rsquo;s pom.xml file.</p>
 <div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code
 class="language-java" data-lang="java"><span style="display:flex;"><span><span 
style="color:#f92672">import</span> 
org.apache.amoro.flink.InternalCatalogBuilder;
 </span></span><span style="display:flex;"><span><span 
style="color:#f92672">import</span> 
org.apache.amoro.flink.table.MixedFormatTableLoader;
 </span></span><span style="display:flex;"><span><span 
style="color:#f92672">import</span> 
org.apache.amoro.flink.util.MixedFormatUtils;
diff --git a/output/docs/latest/flink-datastream/index.html 
b/output/docs/latest/flink-datastream/index.html
index 3eb6c357c..54e569b24 100644
--- a/output/docs/latest/flink-datastream/index.html
+++ b/output/docs/latest/flink-datastream/index.html
@@ -685,7 +685,7 @@
 </span></span><span style="display:flex;"><span>  ...
 </span></span><span style="display:flex;"><span>  <span 
style="color:#f92672">&lt;dependency&gt;</span>
 </span></span><span style="display:flex;"><span>    <span 
style="color:#f92672">&lt;groupId&gt;</span>org.apache.amoro<span 
style="color:#f92672">&lt;/groupId&gt;</span>
-</span></span><span style="display:flex;"><span>    <span 
style="color:#75715e">&lt;!-- For example: 
amoro-format-mixed-flink-runtime-1.17 --&gt;</span>
+</span></span><span style="display:flex;"><span>    <span 
style="color:#75715e">&lt;!-- For example: 
amoro-format-mixed-flink-runtime-1.18 --&gt;</span>
 </span></span><span style="display:flex;"><span>    <span 
style="color:#f92672">&lt;artifactId&gt;</span>amoro-format-mixed-flink-runtime-${flink.minor-version}<span
 style="color:#f92672">&lt;/artifactId&gt;</span>
 </span></span><span style="display:flex;"><span>    <span 
style="color:#75715e">&lt;!-- For example: 0.7.0-incubating --&gt;</span>
 </span></span><span style="display:flex;"><span>    <span 
style="color:#f92672">&lt;version&gt;</span>${amoro-format-mixed-flink.version}<span
 style="color:#f92672">&lt;/version&gt;</span>
diff --git a/output/docs/latest/flink-dml/index.html 
b/output/docs/latest/flink-dml/index.html
index 3cd646faa..4e872c2cf 100644
--- a/output/docs/latest/flink-dml/index.html
+++ b/output/docs/latest/flink-dml/index.html
@@ -846,7 +846,7 @@
           <td>(none)</td>
           <td>String</td>
           <td>No</td>
-          <td>Parameters for Logstore: <br>For Logstore with Kafka 
(&rsquo;log-store.type&rsquo;=&lsquo;kafka&rsquo; default value), all other 
parameters supported by the Kafka Consumer can be set by prefixing properties. 
to the parameter name, for example, 
&lsquo;properties.batch.size&rsquo;=&lsquo;16384&rsquo;. The complete parameter 
information can be found in the <a 
href="https://kafka.apache.org/documentation/#consumerconfigs">Kafka official 
documentation</a>; <br>For LogStore set t [...]
+          <td>Parameters for Logstore: <br>For Logstore with Kafka 
(&rsquo;log-store.type&rsquo;=&lsquo;kafka&rsquo; default value), all other 
parameters supported by the Kafka Consumer can be set by prefixing properties. 
to the parameter name, for example, 
&lsquo;properties.batch.size&rsquo;=&lsquo;16384&rsquo;. The complete parameter 
information can be found in the <a 
href="https://kafka.apache.org/documentation/#consumerconfigs">Kafka official 
documentation</a>; <br>For LogStore set t [...]
       </tr>
       <tr>
           <td>log.consumer.changelog.modes</td>
@@ -861,7 +861,7 @@
 <p><strong>Notes</strong></p>
 <ul>
 <li>When log-store.type = pulsar, the parallelism of the Flink task cannot be 
less than the number of partitions in the Pulsar topic, otherwise some 
partition data cannot be read.</li>
-<li>When the number of topic partitions in log-store is less than the 
parallelism of the Flink task, some Flink subtasks will be idle. At this time, 
if the task has a watermark, the parameter table.exec.source.idle-timeout must 
be configured, otherwise the watermark will not advance. See <a 
href="https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/config/#table-exec-source-idle-timeout">official
 documentation</a> for details.</li>
+<li>When the number of topic partitions in log-store is less than the 
parallelism of the Flink task, some Flink subtasks will be idle. At this time, 
if the task has a watermark, the parameter table.exec.source.idle-timeout must 
be configured, otherwise the watermark will not advance. See <a 
href="https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/config/#table-exec-source-idle-timeout">official
 documentation</a> for details.</li>
 </ul>
 </blockquote>
 <h3 id="streaming-mode-filestore-non-primary-key-table">Streaming mode 
(FileStore non-primary key table)</h3>
@@ -1064,7 +1064,7 @@
           <td>(none)</td>
           <td>String</td>
           <td>No</td>
-          <td>Parameters for Logstore: For Logstore with Kafka 
(&rsquo;log-store.type&rsquo;=&lsquo;kafka&rsquo; default value), all other 
parameters supported by the Kafka Consumer can be set by prefixing properties. 
to the parameter name, for example, 
&lsquo;properties.batch.size&rsquo;=&lsquo;16384&rsquo;. The complete parameter 
information can be found in the <a 
href="https://kafka.apache.org/documentation/#consumerconfigs">Kafka official 
documentation</a>; For LogStore set to Pulsar [...]
+          <td>Parameters for Logstore: For Logstore with Kafka 
(&rsquo;log-store.type&rsquo;=&lsquo;kafka&rsquo; default value), all other 
parameters supported by the Kafka Consumer can be set by prefixing properties. 
to the parameter name, for example, 
&lsquo;properties.batch.size&rsquo;=&lsquo;16384&rsquo;. The complete parameter 
information can be found in the <a 
href="https://kafka.apache.org/documentation/#consumerconfigs">Kafka official 
documentation</a>; For LogStore set to Pulsar [...]
       </tr>
       <tr>
           <td>other table parameters</td>
diff --git a/output/docs/latest/flink-getting-started/index.html 
b/output/docs/latest/flink-getting-started/index.html
index 256c1af95..f9429e658 100644
--- a/output/docs/latest/flink-getting-started/index.html
+++ b/output/docs/latest/flink-getting-started/index.html
@@ -700,14 +700,14 @@ for more information.</p>
 <li><code>Flink Lookup Join</code> performs real-time read of Amoro table data 
for association calculation through Apache Flink Temporal Join grammar.</li>
 </ul>
 <p>The Amoro project can be self-compiled to obtain the runtime jar.</p>
-<p><code>./mvnw clean package -pl ':amoro-mixed-flink-runtime-1.17' -am 
-DskipTests</code></p>
-<p>The Flink Runtime Jar is located in the 
<code>amoro-format-mixed/amoro-mixed-flink/v1.17/amoro-mixed-flink-runtime-1.17/target</code>
 directory.</p>
+<p><code>./mvnw clean package -pl ':amoro-mixed-flink-runtime-1.18' -am 
-DskipTests</code></p>
+<p>The Flink Runtime Jar is located in the 
<code>amoro-format-mixed/amoro-mixed-flink/v1.18/amoro-mixed-flink-runtime-1.18/target</code>
 directory.</p>
 <h2 id="environment-preparation">Environment preparation</h2>
-<p>Download Flink and related dependencies, and download Flink 1.17/1.18/1.19 
as needed. Taking Flink 1.17 as an example:</p>
+<p>Download Flink and related dependencies, and download Flink 1.18/1.19 as 
needed. Taking Flink 1.18 as an example:</p>
 <div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code
 class="language-shell" data-lang="shell"><span 
style="display:flex;"><span><span style="color:#75715e"># Replace version value 
with the latest Amoro version if needed</span>
-</span></span><span style="display:flex;"><span>AMORO_VERSION<span 
style="color:#f92672">=</span>0.8.0-incubating
-</span></span><span style="display:flex;"><span>FLINK_VERSION<span 
style="color:#f92672">=</span>1.17.2
-</span></span><span style="display:flex;"><span>FLINK_MAJOR_VERSION<span 
style="color:#f92672">=</span>1.17
+</span></span><span style="display:flex;"><span>AMORO_VERSION<span 
style="color:#f92672">=</span>0.9.0-incubating
+</span></span><span style="display:flex;"><span>FLINK_VERSION<span 
style="color:#f92672">=</span>1.18.1
+</span></span><span style="display:flex;"><span>FLINK_MAJOR_VERSION<span 
style="color:#f92672">=</span>1.18
 </span></span><span 
style="display:flex;"><span>FLINK_HADOOP_SHADE_VERSION<span 
style="color:#f92672">=</span>2.7.5
 </span></span><span style="display:flex;"><span>APACHE_FLINK_URL<span 
style="color:#f92672">=</span>archive.apache.org/dist/flink
 </span></span><span style="display:flex;"><span>MAVEN_URL<span 
style="color:#f92672">=</span>https://repo1.maven.org/maven2
@@ -729,7 +729,7 @@ for more information.</p>
 </span></span><span style="display:flex;"><span>mv 
flink-shaded-hadoop-2-uber-<span 
style="color:#e6db74">${</span>FLINK_HADOOP_SHADE_VERSION<span 
style="color:#e6db74">}</span>-10.0.jar lib
 </span></span><span style="display:flex;"><span>mv 
amoro-mixed-format-flink-runtime-<span 
style="color:#e6db74">${</span>FLINK_MAJOR_VERSION<span 
style="color:#e6db74">}</span>-<span 
style="color:#e6db74">${</span>AMORO_VERSION<span 
style="color:#e6db74">}</span>.jar lib
 </span></span></code></pre></div><p>Modify Flink related configuration 
files:</p>
-<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code
 class="language-shell" data-lang="shell"><span style="display:flex;"><span>cd 
flink-1.17.2
+<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code
 class="language-shell" data-lang="shell"><span style="display:flex;"><span>cd 
flink-1.18.1
 </span></span><span style="display:flex;"><span>vim conf/flink-conf.yaml
 </span></span></code></pre></div><p>Modify the following settings:</p>
 <div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code
 class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span 
style="color:#75715e"># Increase the number of slots to run two streaming tasks 
simultaneously</span>
@@ -753,7 +753,7 @@ for more information.</p>
 <p>You need to enable Flink checkpoint and modify the <a 
href="https://nightlies.apache.org/flink/flink-docs-release-1.12/deployment/config.html#execution-checkpointing-interval">Flink
 checkpoint configuration</a> in Flink conf. The data will only be committed 
during checkpoint.</p>
 <p><strong>2. When using Flink SQL-Client to read Amoro tables with 
write.upsert feature enabled, there are still duplicate primary key 
data</strong></p>
 <p>The query results obtained through Flink SQL-Client cannot provide MOR 
semantics based on primary keys. If you need to obtain merged results through 
Flink engine queries, you can write the content of Amoro tables to a MySQL 
table through JDBC connector for viewing.</p>
-<p><strong>3. When writing to Amoro tables with write.upsert feature enabled 
through SQL-Client under Flink 1.17, there are still duplicate primary key 
data</strong></p>
+<p><strong>3. When writing to Amoro tables with write.upsert feature enabled 
through SQL-Client under Flink 1.18, there are still duplicate primary key 
data</strong></p>
 <p>You need to execute the command <code>set 
table.exec.sink.upsert-materialize = none</code> in SQL-Client to turn off the 
upsert materialize operator generated upsert view. This operator will affect 
the AmoroWriter&rsquo;s generation of delete data when the write.upsert feature 
is enabled, causing duplicate primary key data to not be merged.</p>
 </div>
                 
diff --git a/output/docs/latest/flink-using-logstore/index.html 
b/output/docs/latest/flink-using-logstore/index.html
index 3d382d404..bb0f5f7d8 100644
--- a/output/docs/latest/flink-using-logstore/index.html
+++ b/output/docs/latest/flink-using-logstore/index.html
@@ -695,10 +695,6 @@
       </tr>
   </thead>
   <tbody>
-      <tr>
-          <td>Flink 1.17</td>
-          <td>✔</td>
-      </tr>
       <tr>
           <td>Flink 1.18</td>
           <td>✔</td>
@@ -718,10 +714,6 @@
       </tr>
   </thead>
   <tbody>
-      <tr>
-          <td>1.17.x</td>
-          <td>0.10.2.*<br> 0.11.*<br> 1.*<br> 2.*<br> 3.*</td>
-      </tr>
       <tr>
           <td>1.18.x</td>
           <td>0.10.2.*<br> 0.11.*<br> 1.*<br> 2.*<br> 3.*</td>
diff --git a/output/docs/latest/index.html b/output/docs/latest/index.html
index 293e5252d..2e6d8f7c0 100644
--- a/output/docs/latest/index.html
+++ b/output/docs/latest/index.html
@@ -737,7 +737,7 @@ For details, please refer to: <a 
href="https://paimon.apache.org/docs/master/";>P
   <tbody>
       <tr>
           <td>Flink</td>
-          <td>1.17.x, 1.18.x, 1.19.x</td>
+          <td>1.18.x, 1.19.x</td>
           <td>✔</td>
           <td>✔</td>
           <td>✖</td>
diff --git a/output/docs/latest/search.json b/output/docs/latest/search.json
index 085b2f4f5..5378c2317 100644
--- a/output/docs/latest/search.json
+++ b/output/docs/latest/search.json
@@ -1 +1 @@
-[{"categories":null,"content":"Amoro Docs This directory contains the 
documentation content of Amoro.\nThe documentation is written in Markdown 
format. The images referenced in the documentation are saved in the images 
folder. Style Proper nouns should start with a capital letter, like 
Hadoop、Hive、Iceberg、Amoro 
","description":"","title":"","uri":"https://amoro.apache.org/docs/latest/readme/"},{"categories":null,"content":";
 AMS Configuration Amoro Management Service Configuration The con [...]
\ No newline at end of file
+[{"categories":null,"content":"Amoro Docs This directory contains the 
documentation content of Amoro.\nThe documentation is written in Markdown 
format. The images referenced in the documentation are saved in the images 
folder. Style Proper nouns should start with a capital letter, like 
Hadoop、Hive、Iceberg、Amoro 
","description":"","title":"","uri":"https://amoro.apache.org/docs/latest/readme/"},{"categories":null,"content":";
 AMS Configuration Amoro Management Service Configuration The con [...]
\ No newline at end of file