This is an automated email from the ASF dual-hosted git repository.

hxb pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new ec517ac85 [hotfix] Fix the format of the 1.16-announcement
ec517ac85 is described below

commit ec517ac8573639d2f40777e241b4a5c84e149bb8
Author: huangxingbo <h...@apache.org>
AuthorDate: Fri Oct 28 18:18:24 2022 +0800

    [hotfix] Fix the format of the 1.16-announcement
---
 _posts/2022-10-28-1.16-announcement.md         |   5 ++
 content/news/2022/10/28/1.16-announcement.html | 104 +++++++++++++++----------
 2 files changed, 66 insertions(+), 43 deletions(-)

diff --git a/_posts/2022-10-28-1.16-announcement.md b/_posts/2022-10-28-1.16-announcement.md
index eff11a596..1d3b5c719 100644
--- a/_posts/2022-10-28-1.16-announcement.md
+++ b/_posts/2022-10-28-1.16-announcement.md
@@ -156,6 +156,7 @@ to display multiple concurrent attempts of tasks and blocked task managers.
 We have introduced a new [Hybrid Shuffle](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/ops/batch/batch_shuffle) 
 Mode for batch executions. It combines the advantages of blocking shuffle and 
 pipelined shuffle (in streaming mode).
+
  - Like blocking shuffle, it does not require upstream and downstream tasks to run simultaneously, 
    which allows executing a job with little resources.
  - Like pipelined shuffle, it does not require downstream tasks to be executed 
@@ -200,6 +201,7 @@ so that stream computing can continue to lead.
 Changelog state backend aims at making checkpoint intervals shorter and more predictable, 
 this release is prod-ready and is dedicated to adapting changelog state backend to 
 the existing state backends and improving the usability of changelog state backend:
+
  - Support state migration
  - Support local recovery
  - Introduce file cache to optimize restoring
@@ -285,6 +287,7 @@ in [the documentation](https://nightlies.apache.org/flink/flink-docs-release-1.1
 ## Enhanced Lookup Join
 
 Lookup join is widely used in stream processing, and we have introduced several improvements:
+
 - Adds a unified abstraction for lookup source cache and 
   [related metrics](https://cwiki.apache.org/confluence/display/FLINK/FLIP-221%3A+Abstraction+for+lookup+source+cache+and+metric) 
   to speed up lookup queries
@@ -324,6 +327,7 @@ kinds of Flink jobs using Python language smoothly.
 ## New SQL Syntax
 
 In 1.16, we extend more DDL syntaxes which could help users to better use SQL:
+
 - [USING JAR](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/sql/create/#create-function) 
   supports dynamic loading of UDF jar to help platform developers to easily manage UDF.
 - [CREATE TABLE AS SELECT](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/sql/create/#as-select_statement) (CTAS) 
@@ -343,6 +347,7 @@ This feature is very useful for ML and interactive programming in Python.
 ## History Server & Completed Jobs Information Enhancement
 
 We have enhanced the experiences of viewing completed jobs’ information in this release.
+
 - JobManager / HistoryServer WebUI now provides detailed execution time metrics, 
   including duration tasks spent in each execution state and the accumulated 
   busy / idle / back-pressured time during running.
diff --git a/content/news/2022/10/28/1.16-announcement.html b/content/news/2022/10/28/1.16-announcement.html
index a671c833a..aac16c5e1 100644
--- a/content/news/2022/10/28/1.16-announcement.html
+++ b/content/news/2022/10/28/1.16-announcement.html
@@ -394,14 +394,17 @@ to display multiple concurrent attempts of tasks and blocked task managers.</p>
 
 <p>We have introduced a new <a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/ops/batch/batch_shuffle">Hybrid
 Shuffle</a> 
 Mode for batch executions. It combines the advantages of blocking shuffle and 
-pipelined shuffle (in streaming mode).
- - Like blocking shuffle, it does not require upstream and downstream tasks to run simultaneously, 
-   which allows executing a job with little resources.
- - Like pipelined shuffle, it does not require downstream tasks to be executed 
-   after upstream tasks finish, which reduces the overall execution time of the job when 
-   given sufficient resources.
- - It adapts to custom preferences between persisting less data and restarting less tasks on failures, 
-   by providing different spilling strategies.</p>
+pipelined shuffle (in streaming mode).</p>
+
+<ul>
+  <li>Like blocking shuffle, it does not require upstream and downstream tasks to run simultaneously, 
+which allows executing a job with little resources.</li>
+  <li>Like pipelined shuffle, it does not require downstream tasks to be executed 
+after upstream tasks finish, which reduces the overall execution time of the job when 
+given sufficient resources.</li>
+  <li>It adapts to custom preferences between persisting less data and restarting less tasks on failures, 
+by providing different spilling strategies.</li>
+</ul>
 
 <p>Note: This feature is experimental and by default not activated.</p>
 
@@ -438,14 +441,20 @@ so that stream computing can continue to lead.</p>
 
 <p>Changelog state backend aims at making checkpoint intervals shorter and more predictable, 
 this release is prod-ready and is dedicated to adapting changelog state backend to 
-the existing state backends and improving the usability of changelog state backend:
- - Support state migration
- - Support local recovery
- - Introduce file cache to optimize restoring
- - Support switch based on checkpoint
- - Improve the monitoring experience of changelog state backend
-   - expose changelog’s metrics
-   - expose changelog’s configuration to webUI</p>
+the existing state backends and improving the usability of changelog state backend:</p>
+
+<ul>
+  <li>Support state migration</li>
+  <li>Support local recovery</li>
+  <li>Introduce file cache to optimize restoring</li>
+  <li>Support switch based on checkpoint</li>
+  <li>Improve the monitoring experience of changelog state backend
+    <ul>
+      <li>expose changelog’s metrics</li>
+      <li>expose changelog’s configuration to webUI</li>
+    </ul>
+  </li>
+</ul>
 
 <p>Table 1: The comparison between Changelog Enabled / Changelog Disabled on value state 
 (see <a href="https://flink.apache.org/2022/05/30/changelog-state-backend.html">this blog</a> for more details)</p>
@@ -553,16 +562,19 @@ in <a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/
 
 <h2 id="enhanced-lookup-join">Enhanced Lookup Join</h2>
 
-<p>Lookup join is widely used in stream processing, and we have introduced several improvements:
-- Adds a unified abstraction for lookup source cache and 
-  <a href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-221%3A+Abstraction+for+lookup+source+cache+and+metric">related metrics</a> 
-  to speed up lookup queries
-- Introduces the configurable asynchronous mode (ALLOW_UNORDERED) via 
-  <a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/config/#table-exec-async-lookup-output-mode">job configuration</a> 
-  or <a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/sql/queries/hints/#lookup">lookup hint</a> 
-  to significantly improve query throughput without compromising correctness.
-- <a href="https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/hints/#3-enable-delayed-retry-strategy-for-lookup">Retryable lookup mechanism</a> 
-  gives users more tools to solve the delayed updates issue in external systems.</p>
+<p>Lookup join is widely used in stream processing, and we have introduced several improvements:</p>
+
+<ul>
+  <li>Adds a unified abstraction for lookup source cache and 
+<a href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-221%3A+Abstraction+for+lookup+source+cache+and+metric">related metrics</a> 
+to speed up lookup queries</li>
+  <li>Introduces the configurable asynchronous mode (ALLOW_UNORDERED) via 
+<a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/config/#table-exec-async-lookup-output-mode">job configuration</a> 
+or <a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/sql/queries/hints/#lookup">lookup hint</a> 
+to significantly improve query throughput without compromising correctness.</li>
+  <li><a href="https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/hints/#3-enable-delayed-retry-strategy-for-lookup">Retryable lookup mechanism</a> 
+gives users more tools to solve the delayed updates issue in external systems.</li>
+</ul>
 
 <h2 id="retry-support-for-async-io">Retry Support For Async I/O</h2>
 
@@ -592,14 +604,17 @@ kinds of Flink jobs using Python language smoothly.</p>
 
 <h2 id="new-sql-syntax">New SQL Syntax</h2>
 
-<p>In 1.16, we extend more DDL syntaxes which could help users to better use SQL:
-- <a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/sql/create/#create-function">USING JAR</a> 
-  supports dynamic loading of UDF jar to help platform developers to easily manage UDF.
-- <a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/sql/create/#as-select_statement">CREATE TABLE AS SELECT</a> (CTAS) 
-  supports users to create new tables based on existing tables and queries.
-- <a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/zh/docs/dev/table/sql/analyze">ANALYZE TABLE</a> 
-  supports users to manually generate table statistics so that the optimizer could 
-  generate better execution plans.</p>
+<p>In 1.16, we extend more DDL syntaxes which could help users to better use SQL:</p>
+
+<ul>
+  <li><a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/sql/create/#create-function">USING JAR</a> 
+supports dynamic loading of UDF jar to help platform developers to easily manage UDF.</li>
+  <li><a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/sql/create/#as-select_statement">CREATE TABLE AS SELECT</a> (CTAS) 
+supports users to create new tables based on existing tables and queries.</li>
+  <li><a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/zh/docs/dev/table/sql/analyze">ANALYZE TABLE</a> 
+supports users to manually generate table statistics so that the optimizer could 
+generate better execution plans.</li>
+</ul>
 
 <h2 id="cache-in-datastream-for-interactive-programming">Cache in DataStream for Interactive Programming</h2>
 
@@ -611,15 +626,18 @@ This feature is very useful for ML and interactive programming in Python.</p>
 
 <h2 id="history-server--completed-jobs-information-enhancement">History Server &amp; Completed Jobs Information Enhancement</h2>
 
-<p>We have enhanced the experiences of viewing completed jobs’ information in this release.
-- JobManager / HistoryServer WebUI now provides detailed execution time metrics, 
-  including duration tasks spent in each execution state and the accumulated 
-  busy / idle / back-pressured time during running.
-- JobManager / HistoryServer WebUI now provides aggregation of major SubTask metrics, 
-  grouped by Task or TaskManager.
-- JobManager / HistoryServer WebUI now provides more environmental information, 
-  including environment variables, JVM options and classpath.
-- HistoryServer now supports browsing logs[6] from external log archiving services.</p>
+<p>We have enhanced the experiences of viewing completed jobs’ information in this release.</p>
+
+<ul>
+  <li>JobManager / HistoryServer WebUI now provides detailed execution time metrics, 
+including duration tasks spent in each execution state and the accumulated 
+busy / idle / back-pressured time during running.</li>
+  <li>JobManager / HistoryServer WebUI now provides aggregation of major SubTask metrics, 
+grouped by Task or TaskManager.</li>
+  <li>JobManager / HistoryServer WebUI now provides more environmental information, 
+including environment variables, JVM options and classpath.</li>
+  <li>HistoryServer now supports browsing logs[6] from external log archiving services.</li>
+</ul>
 
 <h2 id="protobuf-format">Protobuf format</h2>
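For readers who want to try the new DDL the patched announcement describes, it looks roughly like the following. This is an illustrative sketch only: the table names, function name, jar path, and connector option below are made up for the example, and the exact syntax should be checked against the 1.16 documentation linked in the post.

```sql
-- Register a UDF whose jar is loaded dynamically at creation time
-- (the "USING JAR" syntax); class name and path are placeholders.
CREATE TEMPORARY FUNCTION parse_order AS 'com.example.udf.ParseOrder'
  USING JAR '/opt/flink/udfs/parse-order.jar';

-- CREATE TABLE AS SELECT (CTAS): create a new table from an existing query.
-- The 'print' connector is just an example sink.
CREATE TABLE order_summaries WITH ('connector' = 'print')
  AS SELECT order_id, parse_order(payload) AS details FROM orders;

-- Manually generate table statistics so the optimizer can
-- produce better execution plans.
ANALYZE TABLE orders COMPUTE STATISTICS;
```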
 
