AHeise commented on a change in pull request #17182:
URL: https://github.com/apache/flink/pull/17182#discussion_r706176761



##########
File path: docs/content.zh/release-notes/flink-1.14.md
##########
@@ -0,0 +1,424 @@
+---
+title: "Release Notes - Flink 1.14"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Release Notes - Flink 1.14
+
+These release notes discuss important aspects, such as configuration, behavior, or dependencies,
+that changed between Flink 1.13 and Flink 1.14. Please read these notes carefully if you are
+planning to upgrade your Flink version to 1.14.
+
+### DataStream API
+
+#### Expose a consistent GlobalDataExchangeMode
+
+##### [FLINK-23402](https://issues.apache.org/jira/browse/FLINK-23402)
+
+The default DataStream API shuffle mode for batch executions has been changed to blocking exchanges
+for all edges of the stream graph. A new option `execution.batch-shuffle-mode` allows changing it
+to pipelined behavior if necessary.
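+
+For example, to restore pipelined exchanges, a sketch of the config entry (shown here for
+`flink-conf.yaml`; the value names follow the option's documentation):
+
+```yaml
+# Default in batch mode is ALL_EXCHANGES_BLOCKING
+execution.batch-shuffle-mode: ALL_EXCHANGES_PIPELINED
+```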
+
+#### Allow @TypeInfo annotation on POJO field declarations
+
+##### [FLINK-12141](https://issues.apache.org/jira/browse/FLINK-12141)
+
+`@TypeInfo` annotations can now also be used on POJO fields which, for example, can help to define
+custom serializers for third-party classes that cannot otherwise be annotated themselves.
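+
+As an illustration, a minimal sketch (here `ThirdPartyType` and `ThirdPartyTypeInfoFactory`
+are hypothetical names, not part of Flink):
+
+```java
+import org.apache.flink.api.common.typeinfo.TypeInfo;
+
+public class MyPojo {
+    public int id;
+
+    // New in 1.14: the annotation may be placed directly on the field,
+    // pointing to a TypeInfoFactory for the third-party class.
+    @TypeInfo(ThirdPartyTypeInfoFactory.class)
+    public ThirdPartyType payload;
+}
+```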
+
+### Table & SQL
+
+#### Use pipeline name consistently across DataStream API and Table API
+
+##### [FLINK-23646](https://issues.apache.org/jira/browse/FLINK-23646)
+
+The default job name for DataStream API programs in batch mode has changed from `"Flink Streaming Job"` to
+`"Flink Batch Job"`. A custom name can be set with the config option `pipeline.name`.
+
+#### Propagate unique keys for fromChangelogStream
+
+##### [FLINK-24033](https://issues.apache.org/jira/browse/FLINK-24033)
+
+Compared to 1.13.2, `StreamTableEnvironment.fromChangelogStream` might produce a different stream
+because primary keys were not properly considered before.
+
+#### Support new type inference for Table#flatMap
+
+##### [FLINK-16769](https://issues.apache.org/jira/browse/FLINK-16769)
+
+`Table.flatMap()` now supports the new type system. Users are requested to upgrade their functions.
+
+#### Add Scala implicit conversions for new API methods
+
+##### [FLINK-22590](https://issues.apache.org/jira/browse/FLINK-22590)
+
+The Scala implicits that convert between DataStream API and Table API have been updated to the new
+methods of FLIP-136.
+
+The changes might require an update of pipelines that used `toTable` or implicit conversions from
+`Table` to `DataStream[Row]`.
+
+#### Remove YAML environment file support in SQL Client
+
+##### [FLINK-22540](https://issues.apache.org/jira/browse/FLINK-22540)
+
+The `sql-client-defaults.yaml` file was deprecated in the 1.13 release and has been removed
+in this release. As an alternative, you can use the `-i` startup option to execute an initialization SQL
+file to set up the SQL Client session. The initialization SQL file can use Flink DDLs to
+define available catalogs, table sources and sinks, user-defined functions, and other properties
+required for execution and deployment.
+
+See more: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sqlclient/#initialize-session-using-sql-files
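+
+For instance, a minimal initialization file might look as follows (table name and
+connector options are illustrative):
+
+```sql
+-- init.sql, passed via: ./bin/sql-client.sh -i init.sql
+CREATE TABLE orders (
+  order_id BIGINT,
+  amount DOUBLE
+) WITH (
+  'connector' = 'datagen'
+);
+
+SET 'execution.runtime-mode' = 'batch';
+```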
+
+#### Remove the legacy planner code base
+
+##### [FLINK-22864](https://issues.apache.org/jira/browse/FLINK-22864)
+
+The old Table/SQL planner has been removed. BatchTableEnvironment and DataSet API interop with Table
+API are no longer supported. Use the unified TableEnvironment for batch and stream processing with
+the new planner, or the DataStream API in batch execution mode.
+
+Users are encouraged to update their pipelines. Otherwise, Flink 1.13 is the last version that offers
+the old functionality.
+
+#### Remove "blink" suffix from table modules
+
+##### [FLINK-22879](https://issues.apache.org/jira/browse/FLINK-22879)
+
+The following Maven modules have been renamed:
+* flink-table-planner-blink -> flink-table-planner
+* flink-table-runtime-blink -> flink-table-runtime
+* flink-table-uber-blink -> flink-table-uber
+
+It might be required to update job JAR dependencies. Note that
+flink-table-planner and flink-table-uber used to contain the legacy planner before Flink 1.14 and
+now contain the only officially supported planner (i.e. the planner previously known as the 'Blink' planner).
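+
+A typical dependency update, sketched for Maven (the Scala suffix and version shown are
+examples; adjust them to your setup):
+
+```xml
+<!-- before 1.14: flink-table-planner-blink_2.12 -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-table-planner_2.12</artifactId>
+  <version>1.14.0</version>
+  <scope>provided</scope>
+</dependency>
+```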
+
+#### Remove BatchTableEnvironment and related API classes
+
+##### [FLINK-22877](https://issues.apache.org/jira/browse/FLINK-22877)
+
+Due to the removal of BatchTableEnvironment, BatchTableSource and BatchTableSink have been removed
+as well. Use DynamicTableSource and DynamicTableSink instead. They support the old InputFormat and
+OutputFormat interfaces as runtime providers if necessary.
+
+#### Remove TableEnvironment#connect
+
+##### [FLINK-23063](https://issues.apache.org/jira/browse/FLINK-23063)
+
+The deprecated `TableEnvironment#connect()` method has been removed. Use the
+new `TableEnvironment#createTemporaryTable(String, TableDescriptor)` to create tables
+programmatically. Please note that this method only supports sources and sinks that comply with
+FLIP-95. This is also indicated by the new property design `'connector'='kafka'` instead of `'connector.type'='kafka'`.
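+
+A sketch of the replacement API (connector choice, schema, and options are illustrative):
+
+```java
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.Schema;
+import org.apache.flink.table.api.TableDescriptor;
+import org.apache.flink.table.api.TableEnvironment;
+
+public class DescriptorExample {
+    public static void main(String[] args) {
+        TableEnvironment tEnv = TableEnvironment.create(
+            EnvironmentSettings.newInstance().inStreamingMode().build());
+
+        // Replaces the removed TableEnvironment#connect()
+        tEnv.createTemporaryTable(
+            "SourceTable",
+            TableDescriptor.forConnector("datagen")
+                .schema(Schema.newBuilder()
+                    .column("f0", DataTypes.STRING())
+                    .build())
+                .option("rows-per-second", "10")
+                .build());
+    }
+}
+```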
+
+#### Deprecate toAppendStream and toRetractStream
+
+##### [FLINK-23330](https://issues.apache.org/jira/browse/FLINK-23330)
+
+The outdated variants of `StreamTableEnvironment.{fromDataStream|toAppendStream|toRetractStream}`
+have been deprecated. Use the `(from|to)(Data|Changelog)Stream` alternatives introduced in 1.13.
+
+#### Remove old connectors and formats stack around descriptors
+
+##### [FLINK-23513](https://issues.apache.org/jira/browse/FLINK-23513)
+
+The legacy versions of the SQL Kafka connector and SQL Elasticsearch connector have been removed
+together with their corresponding legacy formats. DDL or descriptors that still use `'connector.type='` or
+`'format.type='` options need to be updated to the new connectors and formats available via the `'connector='` option.
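+
+For example, a legacy Kafka DDL using `'connector.type'='kafka'` and `'format.type'='json'`
+could be rewritten roughly as follows (topic, column names, and server addresses are
+placeholders):
+
+```sql
+CREATE TABLE kafka_source (
+  user_id BIGINT,
+  message STRING
+) WITH (
+  'connector' = 'kafka',
+  'topic' = 'users',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'scan.startup.mode' = 'earliest-offset',
+  'format' = 'json'
+);
+```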
+
+#### Drop BatchTableSource/Sink HBaseTableSource/Sink and related classes
+
+##### [FLINK-22623](https://issues.apache.org/jira/browse/FLINK-22623)
+
+The HBaseTableSource/Sink and related classes, including various HBaseInputFormats and
+HBaseSinkFunction, have been removed. It is possible to read via the Table & SQL API and convert the
+Table to the DataStream API (or vice versa) if necessary. The DataSet API is no longer supported.
+
+#### Drop BatchTableSource ParquetTableSource and related classes
+
+##### [FLINK-22622](https://issues.apache.org/jira/browse/FLINK-22622)
+
+The ParquetTableSource and related classes, including various ParquetInputFormats, have been removed.
+Use the filesystem connector with a Parquet format as a replacement. It is possible to read via the
+Table & SQL API and convert the Table to the DataStream API if necessary. The DataSet API is no longer
+supported.
+
+#### Drop BatchTableSource OrcTableSource and related classes
+
+##### [FLINK-22620](https://issues.apache.org/jira/browse/FLINK-22620)
+
+The OrcTableSource and related classes (including OrcInputFormat) have been removed. Use the
+filesystem connector with an ORC format as a replacement. It is possible to read via the Table & SQL API
+and convert the Table to the DataStream API if necessary. The DataSet API is no longer supported.
+
+#### Drop usages of BatchTableEnvironment and old planner in Python
+
+##### [FLINK-22619](https://issues.apache.org/jira/browse/FLINK-22619)
+
+The Python API no longer offers a dedicated BatchTableEnvironment. Instead, users can switch
+to the unified TableEnvironment for both batch and stream processing. Only the Blink planner (the
+only remaining planner in 1.14) is supported.
+
+#### Migrate ModuleFactory to the new factory stack
+
+##### [FLINK-23720](https://issues.apache.org/jira/browse/FLINK-23720)
+
+The `LOAD/UNLOAD MODULE` architecture for table modules has been updated to the new factory stack of
+FLIP-95. Users of this feature should update their `ModuleFactory` implementations.
+
+#### Migrate Table API to new KafkaSink
+
+##### [FLINK-23639](https://issues.apache.org/jira/browse/FLINK-23639)
+
+Table API/SQL programs now write to Kafka with the new KafkaSink.
+
+### Connectors
+
+#### Implement FLIP-179: Expose Standardized Operator Metrics
+
+##### [FLINK-23652](https://issues.apache.org/jira/browse/FLINK-23652)
+
+Connectors using the unified Source and Sink interfaces expose certain standardized metrics
+automatically.
+
+#### Port KafkaSink to FLIP-143
+
+##### [FLINK-22902](https://issues.apache.org/jira/browse/FLINK-22902)
+
+KafkaSink supersedes FlinkKafkaProducer and provides efficient exactly-once and at-least-once
+writing with the new unified sink interface, supporting both batch and streaming mode of the DataStream
+API.
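+
+A sketch of the new sink's usage (topic, servers, and prefix are placeholders; for
+exactly-once delivery, a unique transactional id prefix should be configured):
+
+```java
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+
+public class KafkaSinkExample {
+    static KafkaSink<String> buildSink() {
+        return KafkaSink.<String>builder()
+            .setBootstrapServers("localhost:9092")
+            .setRecordSerializer(KafkaRecordSerializationSchema.builder()
+                .setTopic("output-topic")
+                .setValueSerializationSchema(new SimpleStringSchema())
+                .build())
+            .setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
+            // Unique prefix so transactions of different jobs do not interfere
+            .setTransactionalIdPrefix("my-app")
+            .build();
+    }
+}
+```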

Review comment:
       ```suggestion
    `KafkaSink` supersedes `FlinkKafkaProducer` and provides efficient exactly-once and at-least-once writing with the new unified sink interface supporting both batch and streaming mode of DataStream API. To upgrade, please stop with savepoint. `KafkaSink` needs a user-configured and unique transaction prefix, such that transactions of different applications do not interfere with each other.
   ```



