gaoyunhaii commented on code in PR #19584: URL: https://github.com/apache/flink/pull/19584#discussion_r861617764
########## docs/content/release-notes/flink-1.15.md: ##########
@@ -0,0 +1,597 @@
+---
+title: "Release Notes - Flink 1.15"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Release notes - Flink 1.15
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.14 and Flink 1.15. Please read these
+notes carefully if you are planning to upgrade your Flink version to 1.15.
+
+## Summary of changed dependency names
+
+There are several changes in Flink 1.15 that require updating dependency names when
+upgrading from earlier versions, mainly stemming from the effort to remove Scala dependencies
+from non-Scala modules and to reorganize the table modules.
A quick checklist of the dependency changes is as follows:
+
+* Any dependency on one of the following modules needs to be updated to no longer include a suffix:
+
+  ```
+  flink-cep
+  flink-clients
+  flink-connector-elasticsearch-base
+  flink-connector-elasticsearch6
+  flink-connector-elasticsearch7
+  flink-connector-gcp-pubsub
+  flink-connector-hbase-1.4
+  flink-connector-hbase-2.2
+  flink-connector-hbase-base
+  flink-connector-jdbc
+  flink-connector-kafka
+  flink-connector-kinesis
+  flink-connector-nifi
+  flink-connector-pulsar
+  flink-connector-rabbitmq
+  flink-container
+  flink-dstl-dfs
+  flink-gelly
+  flink-hadoop-bulk
+  flink-kubernetes
+  flink-runtime-web
+  flink-sql-connector-elasticsearch6
+  flink-sql-connector-elasticsearch7
+  flink-sql-connector-hbase-1.4
+  flink-sql-connector-hbase-2.2
+  flink-sql-connector-kafka
+  flink-sql-connector-kinesis
+  flink-sql-connector-rabbitmq
+  flink-state-processor-api
+  flink-statebackend-rocksdb
+  flink-streaming-java
+  flink-test-utils
+  flink-yarn
+  flink-table-api-java-bridge
+  flink-table-runtime
+  flink-sql-client
+  flink-orc
+  flink-orc-nohive
+  flink-parquet
+  ```
+* For Table / SQL users, the new module `flink-table-planner-loader` replaces `flink-table-planner_2.12`
+  and avoids the need for a Scala suffix. As a consequence, `flink-table-uber` has been split into `flink-table-api-java-uber`,
+  `flink-table-planner(-loader)`, and `flink-table-runtime`. Scala users need to explicitly add a dependency
+  to `flink-table-api-scala` or `flink-table-api-scala-bridge`. For backwards compatibility, users can still
+  swap it with `flink-table-planner_2.12` located in `opt/`.
+
+The details of the involved issues are listed as follows.
+
+#### Add support for opting-out of Scala
+
+##### [FLINK-20845](https://issues.apache.org/jira/browse/FLINK-20845)
+
+The Java DataSet/DataStream APIs are now independent of Scala and no longer transitively depend on it.
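For Maven users, the suffix removal in the checklist above amounts to a rename of the artifact id. A sketch, using `flink-clients` from the list (version numbers are illustrative):

```xml
<!-- Flink 1.14 and earlier: the artifact carried a Scala suffix -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-clients_2.12</artifactId>
  <version>1.14.4</version>
</dependency>

<!-- Flink 1.15: the suffix is dropped -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-clients</artifactId>
  <version>1.15.0</version>
</dependency>
```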
+
+The implications are the following:
+
+* If you only intend to use the Java APIs, with Java types,
+then you can opt in to a Scala-free Flink by removing the `flink-scala` jar from the `lib/` directory of the distribution.
+You are then free to use any Scala version and any Scala libraries.
+You can either bundle Scala itself in your user-jar, or put it into the `lib/` directory of the distribution.
+
+* If you relied on the Scala APIs, without an explicit dependency on them,
+  then you may experience issues when building your projects. You can solve this by adding explicit dependencies on
+  the APIs that you are using. This should primarily affect users of the Scala `DataStream/CEP` APIs.
+
+* Many modules have lost their Scala suffix.
+  Further caution is advised when mixing dependencies from different Flink versions (e.g., an older connector),
+  as you may now end up pulling in multiple versions of a single module (which was previously prevented by the names being equal).
+
+#### Reorganize table modules and introduce flink-table-planner-loader
+
+##### [FLINK-25128](https://issues.apache.org/jira/browse/FLINK-25128)
+
+The new module `flink-table-planner-loader` replaces `flink-table-planner_2.12` and avoids the need for a Scala suffix.
+It is included in the Flink distribution under `lib/`. For backwards compatibility, users can still swap it with
+`flink-table-planner_2.12` located in `opt/`. As a consequence, `flink-table-uber` has been split into `flink-table-api-java-uber`,
+`flink-table-planner(-loader)`, and `flink-table-runtime`. `flink-sql-client` has no Scala suffix anymore.
+
+It is recommended that new projects depend on `flink-table-planner-loader` (without Scala suffix) in provided scope.
+
+Note that the distribution does not include the Scala API by default.
+Scala users need to explicitly add a dependency to `flink-table-api-scala` or `flink-table-api-scala-bridge`.
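Put together, a new Table/SQL project on 1.15 might declare its table dependencies roughly as follows (a sketch with illustrative version numbers; the Scala bridge entry applies to Scala users only):

```xml
<!-- Planner, loaded at runtime from lib/; recommended in provided scope -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-planner-loader</artifactId>
  <version>1.15.0</version>
  <scope>provided</scope>
</dependency>
<!-- Table runtime classes; no Scala suffix anymore -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-runtime</artifactId>
  <version>1.15.0</version>
  <scope>provided</scope>
</dependency>
<!-- Only for Scala users: the Scala API must now be added explicitly -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-api-scala-bridge_2.12</artifactId>
  <version>1.15.0</version>
</dependency>
```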
+
+#### Remove flink-scala dependency from flink-table-runtime
+
+##### [FLINK-25114](https://issues.apache.org/jira/browse/FLINK-25114)
+
+The `flink-table-runtime` module has no Scala suffix anymore.
+Make sure to include `flink-scala` if the legacy type system (based on TypeInformation) with case classes is still used within the Table API.
+
+#### flink-table uber jar should not include flink-connector-files dependency
+
+##### [FLINK-24687](https://issues.apache.org/jira/browse/FLINK-24687)
+
+The table file system connector is no longer part of the `flink-table-uber` JAR but is a dedicated (and removable)
+`flink-connector-files` JAR in the `/lib` directory of a Flink distribution.
+
+## JDK Upgrade
+
+Support for Java 8 is now deprecated and will be removed in a future release
+([FLINK-25247](https://issues.apache.org/jira/browse/FLINK-25247)). We recommend
+that all users migrate to Java 11.
+
+The default Java version in the Flink docker images is now Java 11
+([FLINK-25251](https://issues.apache.org/jira/browse/FLINK-25251)).
+There are also images built with Java 8, tagged with “java8”.
+
+## Drop support for Scala 2.11
+
+Support for Scala 2.11 has been removed in
+[FLINK-20845](https://issues.apache.org/jira/browse/FLINK-20845).
+All Flink dependencies that (transitively)
+depend on Scala are suffixed with the Scala version that they are built for, for
+example `flink-streaming-scala_2.12`. Users should update all such Flink dependencies,
+changing "2.11" to "2.12".
+
+Scala versions (2.11, 2.12, etc.) are not binary compatible with one another. This
+also means that there is no guarantee that you can restore from a savepoint taken
+with a Flink Scala 2.11 application when upgrading to a Flink Scala 2.12
+application. Whether this works depends on the data types that you have been using in your
+application.
+
+Also, the Scala Shell/REPL has been removed in
+[FLINK-24360](https://issues.apache.org/jira/browse/FLINK-24360).
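In build files, the Scala 2.11 removal described above shows up as a suffix change on the Scala-dependent artifacts, for example (illustrative versions):

```xml
<!-- Before: built against Scala 2.11; no longer published for 1.15 -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-scala_2.11</artifactId>
  <version>1.14.4</version>
</dependency>

<!-- After: the Scala 2.12 suffix -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-scala_2.12</artifactId>
  <version>1.15.0</version>
</dependency>
```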
+
+## DataStream API
+
+#### TypeSerializer version mismatch during eagerly restore
+
+##### [FLINK-24858](https://issues.apache.org/jira/browse/FLINK-24858)
+
+This ticket resolves an issue where the wrong serializer might have been picked
+during state migration between Flink versions.
+
+## Table API & SQL
+
+#### Make the legacy behavior disabled by default
+
+##### [FLINK-26551](https://issues.apache.org/jira/browse/FLINK-26551)
+
+The legacy casting behavior has been disabled by default. This might have
+implications on corner cases (string parsing, numeric overflows, to-string
+representation, varchar/binary precisions). Set
+`table.exec.legacy-cast-behaviour=ENABLED` to restore the old behavior.

Review Comment:
   After checking with the SQL folks: the legacy behavior should not be deprecated within a short period, since the behavioral difference might be very large. It might be removed in, for example, 2.0.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use
the URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org