carp84 commented on a change in pull request #12699:
URL: https://github.com/apache/flink/pull/12699#discussion_r442087041



##########
File path: docs/release-notes/flink-1.11.md
##########
@@ -0,0 +1,220 @@
+---
+title: "Release Notes - Flink 1.11"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.10 and Flink 1.11. Please read
+these notes carefully if you are planning to upgrade your Flink version to 
1.11.
+
+* This will be replaced by the TOC
+{:toc}
+
+### Clusters & Deployment
+#### Removal of `LegacyScheduler` ([FLINK-15629](https://issues.apache.org/jira/browse/FLINK-15629))
+Flink no longer supports the legacy scheduler.
+Hence, setting `jobmanager.scheduler: legacy` will no longer work and will fail with an `IllegalArgumentException`.
+The only valid option for `jobmanager.scheduler` is the default value `ng`.
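+
+As a sketch, a `flink-conf.yaml` that still sets the legacy scheduler must be updated; the default value needs no entry at all and is shown here only for explicitness:
+
+```yaml
+# Before (no longer accepted in 1.11):
+# jobmanager.scheduler: legacy
+
+# After (the default; the line can also be omitted entirely):
+jobmanager.scheduler: ng
+```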
+
+#### Bind user code class loader to lifetime of a slot ([FLINK-16408](https://issues.apache.org/jira/browse/FLINK-16408))
+The user code class loader is reused by the `TaskExecutor` as long as there is at least a single slot allocated for the respective job.
+This changes Flink's recovery behaviour slightly, so that it will not reload static fields.
+The benefit is that this change drastically reduces pressure on the JVM's metaspace.
+
+### Memory Management
+#### Removal of deprecated `mesos.resourcemanager.tasks.mem` ([FLINK-15198](https://issues.apache.org/jira/browse/FLINK-15198))
+
+The `mesos.resourcemanager.tasks.mem` option, deprecated in 1.10 in favour of `taskmanager.memory.process.size`, has been removed completely and no longer has any effect in 1.11+.
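+
+A minimal migration sketch for `flink-conf.yaml` (the memory value is only a placeholder; adjust it to your setup):
+
+```yaml
+# Before (deprecated in 1.10, ignored in 1.11+):
+# mesos.resourcemanager.tasks.mem: 2048
+
+# After:
+taskmanager.memory.process.size: 2048m
+```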
+
+### Table API & SQL
+#### Changed packages of `TableEnvironment` ([FLINK-15947](https://issues.apache.org/jira/browse/FLINK-15947))
+Due to various issues with the packages `org.apache.flink.table.api.scala/java`, all classes from those packages were relocated.
+Moreover, the Scala expressions were moved to `org.apache.flink.table.api` as announced in Flink 1.9.
+
+If you used one of:
+* `org.apache.flink.table.api.java.StreamTableEnvironment`
+* `org.apache.flink.table.api.scala.StreamTableEnvironment`
+* `org.apache.flink.table.api.java.BatchTableEnvironment`
+* `org.apache.flink.table.api.scala.BatchTableEnvironment` 
+
+and you do not convert to/from DataStream, switch to:
+* `org.apache.flink.table.api.TableEnvironment` 
+
+If you do convert to/from DataStream/DataSet change your imports to one of:
+* `org.apache.flink.table.api.bridge.java.StreamTableEnvironment`
+* `org.apache.flink.table.api.bridge.scala.StreamTableEnvironment`
+* `org.apache.flink.table.api.bridge.java.BatchTableEnvironment`
+* `org.apache.flink.table.api.bridge.scala.BatchTableEnvironment` 
+
+For the Scala expressions use the import:
+* `org.apache.flink.table.api._` instead of `org.apache.flink.table.api.scala._`
+
+Additionally, if you use Scala's implicit conversions to/from DataStream/DataSet, import `org.apache.flink.table.api.bridge.scala._` instead of `org.apache.flink.table.api.scala._`.
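+
+As an illustrative Java sketch (the class and variable names are made up), the import change for a program that converts to/from DataStream could look like:
+
+```java
+// Before (Flink 1.10):
+// import org.apache.flink.table.api.java.StreamTableEnvironment;
+
+// After (Flink 1.11):
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+
+public class ImportMigrationSketch {
+    public static void main(String[] args) {
+        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+        // The bridge StreamTableEnvironment is only needed when converting to/from
+        // DataStream; otherwise org.apache.flink.table.api.TableEnvironment suffices.
+        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
+    }
+}
+```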
+
+#### Removal of deprecated `StreamTableSink` ([FLINK-16362](https://issues.apache.org/jira/browse/FLINK-16362))
+Existing `StreamTableSink` implementations should remove the `emitDataStream` method.
+
+#### Removal of `BatchTableSink#emitDataSet` ([FLINK-16535](https://issues.apache.org/jira/browse/FLINK-16535))
+The existing `BatchTableSink` implementations should rename `emitDataSet` to `consumeDataSet` and return `DataSink`.
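+
+A hedged sketch of the rename inside a custom sink implementation (the surrounding sink class is hypothetical; only the method shape matters):
+
+```java
+// Before: public void emitDataSet(DataSet<Row> dataSet) { ... }
+// After: the method is renamed and returns the created DataSink.
+@Override
+public DataSink<?> consumeDataSet(DataSet<Row> dataSet) {
+    return dataSet.output(new PrintingOutputFormat<>());
+}
+```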
+  
+#### Corrected execution behavior of TableEnvironment.execute() and StreamTableEnvironment.execute() ([FLINK-16363](https://issues.apache.org/jira/browse/FLINK-16363))
+
+In previous versions, `TableEnvironment.execute()` and `StreamExecutionEnvironment.execute()` could both trigger table and DataStream programs.
+Since Flink 1.11.0, table programs can only be triggered by `TableEnvironment.execute()`.
+Once a table program is converted into a DataStream program (through the `toAppendStream()` or `toRetractStream()` method), it can only be triggered by `StreamExecutionEnvironment.execute()`.
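+
+An illustrative sketch of which `execute()` call triggers what in 1.11 (the table names and the query are made up):
+
+```java
+// Pure table program: triggered only by TableEnvironment.execute().
+tEnv.sqlUpdate("INSERT INTO sinkTable SELECT * FROM sourceTable");
+tEnv.execute("table job");
+
+// Once converted to a DataStream, the pipeline is triggered only by
+// StreamExecutionEnvironment.execute().
+DataStream<Row> stream = tEnv.toAppendStream(table, Row.class);
+stream.print();
+env.execute("datastream job");
+```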
+
+#### Corrected execution behavior of ExecutionEnvironment.execute() and BatchTableEnvironment.execute() ([FLINK-17126](https://issues.apache.org/jira/browse/FLINK-17126))
+
+In previous versions, `BatchTableEnvironment.execute()` and `ExecutionEnvironment.execute()` could both trigger table and DataSet programs for the legacy batch planner.
+Since Flink 1.11.0, batch table programs can only be triggered by `BatchTableEnvironment.execute()`.
+Once a table program is converted into a DataSet program (through the `toDataSet()` method), it can only be triggered by `ExecutionEnvironment.execute()`.
+
+### Configuration
+
+#### Renamed log4j-yarn-session.properties and logback-yarn.xml properties files ([FLINK-17527](https://issues.apache.org/jira/browse/FLINK-17527))
+The logging properties files `log4j-yarn-session.properties` and `logback-yarn.xml` have been renamed to `log4j-session.properties` and `logback-session.xml`.
+Moreover, `yarn-session.sh` and `kubernetes-session.sh` use these logging properties files.
+
+### State
+#### Removal of deprecated background cleanup toggle (State TTL) ([FLINK-15620](https://issues.apache.org/jira/browse/FLINK-15620))
+`StateTtlConfig#cleanupInBackground` has been removed, because the method was deprecated and background TTL cleanup has been enabled by default since 1.10.
+
+#### Removal of deprecated option to disable TTL compaction filter ([FLINK-15621](https://issues.apache.org/jira/browse/FLINK-15621))
+The TTL compaction filter in RocksDB has been enabled by default in 1.10 and is now always enabled in 1.11+.
+Because of that, the following option and methods have been removed in 1.11:
+- `state.backend.rocksdb.ttl.compaction.filter.enabled`
+- `StateTtlConfig#cleanupInRocksdbCompactFilter()`
+- `RocksDBStateBackend#isTtlCompactionFilterEnabled`
+- `RocksDBStateBackend#enableTtlCompactionFilter`
+- `RocksDBStateBackend#disableTtlCompactionFilter`
+- (state_backend.py) `is_ttl_compaction_filter_enabled`
+- (state_backend.py) `enable_ttl_compaction_filter`
+- (state_backend.py) `disable_ttl_compaction_filter`
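+
+Since the filter is now always on, a 1.11 TTL configuration simply drops the removed toggles; a minimal sketch:
+
+```java
+// Background cleanup (including the RocksDB compaction filter) is on by default,
+// so no enable/disable call is needed anymore.
+StateTtlConfig ttlConfig = StateTtlConfig
+    .newBuilder(Time.days(7))
+    .build();
+```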
+
+#### Changed argument type of StateBackendFactory#createFromConfig ([FLINK-16913](https://issues.apache.org/jira/browse/FLINK-16913))
+Starting from Flink 1.11, the `StateBackendFactory#createFromConfig` method takes a `ReadableConfig` instead of a `Configuration`.
+A `Configuration` object is still a valid argument to that method, as it implements the `ReadableConfig` interface.
+Implementors of a custom `StateBackend` should adjust their implementations.
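+
+A hedged sketch of an adjusted custom factory (`MyStateBackend` and the factory class are hypothetical):
+
+```java
+public class MyStateBackendFactory implements StateBackendFactory<MyStateBackend> {
+    @Override
+    public MyStateBackend createFromConfig(ReadableConfig config, ClassLoader classLoader) {
+        // ReadableConfig replaces Configuration in the signature; passing a
+        // Configuration still works, since it implements ReadableConfig.
+        return new MyStateBackend();
+    }
+}
+```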
+

Review comment:
       We will need to add the below lines into the release notes if RC3 is produced:
   
   #### Removal of deprecated OptionsFactory and ConfigurableOptionsFactory classes ([FLINK-18242]())
   The deprecated `OptionsFactory` and `ConfigurableOptionsFactory` classes have been removed; please use `RocksDBOptionsFactory` and `ConfigurableRocksDBOptionsFactory` instead. Please also recompile your application code if any of its classes extend `DefaultConfigurableOptionsFactory`.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Reply via email to