zentol commented on a change in pull request #12699:
URL: https://github.com/apache/flink/pull/12699#discussion_r442970962
##########
File path: docs/release-notes/flink-1.11.md
##########
@@ -0,0 +1,246 @@
+---
+title: "Release Notes - Flink 1.11"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.10 and Flink 1.11. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.11.
+
+* This will be replaced by the TOC
+{:toc}
+
+### Clusters & Deployment
+#### Removal of `LegacyScheduler` ([FLINK-15629](https://issues.apache.org/jira/browse/FLINK-15629))
+Flink no longer supports the legacy scheduler.
+Hence, setting `jobmanager.scheduler: legacy` will no longer work and fail with an `IllegalArgumentException`.
+The only valid option for `jobmanager.scheduler` is the default value `ng`.
+
+#### Bind user code class loader to lifetime of a slot ([FLINK-16408](https://issues.apache.org/jira/browse/FLINK-16408))
+The user code class loader is being reused by the `TaskExecutor` as long as there is at least a single slot allocated for the respective job.
+This changes Flink's recovery behaviour slightly so that it will not reload static fields.
+The benefit is that this change drastically reduces pressure on the JVM's metaspace.
+
+#### Replaced `slave` file name with `workers` ([FLINK-18307](https://issues.apache.org/jira/browse/FLINK-18307))
+For Standalone Setups, the file with the worker nodes is no longer called `slaves` but `workers`.
+Previous setups that use the `start-cluster.sh` and `stop-cluster.sh` scripts simply need to rename that file.

Review comment:
```suggestion
Previous setups that use the `start-cluster.sh` and `stop-cluster.sh` scripts need to rename that file.
```

In accordance with the documentation guide.
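A side note on the class-loader item quoted above: the practical effect is that static (companion-object) state in user code now survives recoveries as long as the job keeps a slot on the `TaskExecutor`. A minimal Scala sketch, assuming a hypothetical `RestartCounter` object (not Flink API):

```scala
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration

// Hypothetical static state held in the user-code class loader.
object RestartCounter {
  var openings: Int = 0
}

class CountingMapper extends RichMapFunction[String, String] {
  override def open(parameters: Configuration): Unit = {
    // Before 1.11, a recovery reloaded user classes, resetting this to 0.
    // With FLINK-16408 the class loader (and thus this object) is reused
    // while the job keeps at least one slot on the TaskExecutor.
    RestartCounter.openings += 1
  }
  override def map(value: String): String = value
}
```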
##########
File path: docs/release-notes/flink-1.11.md
##########
+
+### Memory Management
+#### Removal of deprecated mesos.resourcemanager.tasks.mem ([FLINK-15198](https://issues.apache.org/jira/browse/FLINK-15198))
+
+The `mesos.resourcemanager.tasks.mem` option, deprecated in 1.10 in favour of `taskmanager.memory.process.size`, has been completely removed and will have no effect anymore in 1.11+.
+
+### Table API & SQL
+#### Blink is now the default planner ([FLINK-16934](https://issues.apache.org/jira/browse/FLINK-16934))
+The default table planner has been changed to blink.
+
+#### Changed package structure for Table API ([FLINK-15947](https://issues.apache.org/jira/browse/FLINK-15947))
+
+Due to various issues with packages `org.apache.flink.table.api.scala/java` all classes from those packages were relocated.
+Moreover the scala expressions were moved to `org.apache.flink.table.api` as anounced in Flink 1.9.
+
+If you used one of:
+* `org.apache.flink.table.api.java.StreamTableEnvironment`
+* `org.apache.flink.table.api.scala.StreamTableEnvironment`
+* `org.apache.flink.table.api.java.BatchTableEnvironment`
+* `org.apache.flink.table.api.scala.BatchTableEnvironment`
+
+And you do not convert to/from DataStream switch to:

Review comment:
```suggestion
And you do not convert to/from DataStream, switch to:
```
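As a hedged sketch of the unified entry point this hunk points to (assuming the Blink planner, which the notes above make the default):

```scala
import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

// For programs that never convert to/from DataStream/DataSet.
val settings = EnvironmentSettings
  .newInstance()
  .useBlinkPlanner()   // the 1.11 default planner
  .inStreamingMode()
  .build()
val tEnv = TableEnvironment.create(settings)
```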
##########
File path: docs/release-notes/flink-1.11.md
##########
+* `org.apache.flink.table.api.TableEnvironment`
+
+If you do convert to/from DataStream/DataSet change your imports to one of:
+* `org.apache.flink.table.api.bridge.java.StreamTableEnvironment`
+* `org.apache.flink.table.api.bridge.scala.StreamTableEnvironment`
+* `org.apache.flink.table.api.bridge.java.BatchTableEnvironment`
+* `org.apache.flink.table.api.bridge.scala.BatchTableEnvironment`
+
+For the Scala expressions use the import:
+* `org.apache.flink.table.api._` instead of `org.apache.flink.table.api.bridge.scala._`
+
+Additionally if you use Scala's implicit conversions to/from DataStream/DataSet import `org.apache.flink.table.api.bridge.scala._` instead of `org.apache.flink.table.api.scala._`

Review comment:
```suggestion
Additionally, if you use Scala's implicit conversions to/from DataStream/DataSet, import `org.apache.flink.table.api.bridge.scala._` instead of `org.apache.flink.table.api.scala._`
```
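For reviewers skimming this hunk, a hedged Scala sketch of the relocated imports in use; the job itself is made up:

```scala
import org.apache.flink.streaming.api.scala._    // Scala DataStream API
import org.apache.flink.table.api._              // Table API + Scala expressions (new home)
import org.apache.flink.table.api.bridge.scala._ // implicit Table <-> DataStream conversions

val env  = StreamExecutionEnvironment.getExecutionEnvironment
val tEnv = StreamTableEnvironment.create(env)

val table  = tEnv.fromDataStream(env.fromElements(1, 2, 3))
val result = table.toAppendStream[Int] // conversion provided by bridge.scala._
```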
##########
File path: docs/release-notes/flink-1.11.md
##########
+
+#### Removal of deprecated `StreamTableSink` ([FLINK-16362](https://issues.apache.org/jira/browse/FLINK-16362))
+The existing `StreamTableSink` implementations should remove `emitDataStream` method.
+
+#### Removal of `BatchTableSink#emitDataSet` ([FLINK-16535](https://issues.apache.org/jira/browse/FLINK-16535))
+The existing `BatchTableSink` implementations should rename `emitDataSet` to `consumeDataSet` and return `DataSink`.
+
+#### Corrected execution behavior of TableEnvironment.execute() and StreamTableEnvironment.execute() ([FLINK-16363](https://issues.apache.org/jira/browse/FLINK-16363))
+In previous versions, `TableEnvironment.execute()` and `StreamExecutionEnvironment.execute()` can both trigger table and DataStream programs.
+Since Flink 1.11.0, table programs can only be triggered by `TableEnvironment.execute()`.
+Once table program is converted into DataStream program (through `toAppendStream()` or `toRetractStream()` method), it can only be triggered by `StreamExecutionEnvironment.execute()`.
+
+#### Corrected execution behavior of ExecutionEnvironment.execute() and BatchTableEnvironment.execute() ([FLINK-17126](https://issues.apache.org/jira/browse/FLINK-17126))
+In previous versions, `BatchTableEnvironment.execute()` and `ExecutionEnvironment.execute()` can both trigger table and DataSet programs for legacy batch planner.
+Since Flink 1.11.0, batch table programs can only be triggered by `BatchEnvironment.execute()`.
+Once table program is converted into DataSet program (through `toDataSet()` method), it can only be triggered by `ExecutionEnvironment.execute()`.
+
+#### Added a changeflag to Row type ([FLINK-16998](https://issues.apache.org/jira/browse/FLINK-16998))
+An additional change flag called `RowKind` was added to the `Row` type.
+This changed the serialization format and will trigger a state migration.
+
+### Configuration
+
+#### Renamed log4j-yarn-session.properties and logback-yarn.xml properties files ([FLINK-17527](https://issues.apache.org/jira/browse/FLINK-17527))
+The logging properties files `log4j-yarn-session.properties` and `logback-yarn.xml` haven been renamed into `log4j-session.properties` and `logback-session.xml`.

Review comment:
```suggestion
The logging properties files `log4j-yarn-session.properties` and `logback-yarn.xml` have been renamed to `log4j-session.properties` and `logback-session.xml`.
```
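To make the corrected `execute()` split quoted above concrete, a hedged sketch (hypothetical table names; `sqlUpdate`/`execute` are the 1.11-era calls, used here only for illustration):

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.bridge.scala._
import org.apache.flink.types.Row

val env  = StreamExecutionEnvironment.getExecutionEnvironment
val tEnv = StreamTableEnvironment.create(env)

// Pure table program: since 1.11, only the TableEnvironment triggers it.
tEnv.sqlUpdate("INSERT INTO sink_t SELECT * FROM source_t") // hypothetical tables
tEnv.execute("table-only part")

// Converted to a DataStream: only StreamExecutionEnvironment.execute() runs this part.
val stream: DataStream[Row] = tEnv.from("source_t").toAppendStream[Row]
stream.print()
env.execute("datastream part")
```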
##########
File path: docs/release-notes/flink-1.11.md
##########
+And you do not convert to/from DataStream switch to:
+* `org.apache.flink.table.api.TableEnvironment`
+
+If you do convert to/from DataStream/DataSet change your imports to one of:

Review comment:
```suggestion
If you do convert to/from DataStream/DataSet, change your imports to one of:
```
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org