pnowojski commented on a change in pull request #12699:
URL: https://github.com/apache/flink/pull/12699#discussion_r442108520



##########
File path: docs/release-notes/flink-1.11.md
##########
@@ -0,0 +1,220 @@
+---
+title: "Release Notes - Flink 1.11"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.10 and Flink 1.11. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.11.
+
+* This will be replaced by the TOC
+{:toc}
+
+### Clusters & Deployment
+#### Removal of `LegacyScheduler` ([FLINK-15629](https://issues.apache.org/jira/browse/FLINK-15629))
+Flink no longer supports the legacy scheduler.
+Hence, setting `jobmanager.scheduler: legacy` will no longer work and will fail with an `IllegalArgumentException`.
+The only valid option for `jobmanager.scheduler` is the default value `ng`.
+
+#### Bind user code class loader to lifetime of a slot ([FLINK-16408](https://issues.apache.org/jira/browse/FLINK-16408))
+The user code class loader is reused by the `TaskExecutor` as long as there is at least a single slot allocated for the respective job.
+This changes Flink's recovery behaviour slightly: static fields will not be reloaded.
+The benefit is that this change drastically reduces pressure on the JVM's metaspace.
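+
+For illustration, a hypothetical `RichMapFunction` whose static field now survives a recovery while the `TaskExecutor` keeps a slot for the job (a minimal sketch, not an official example):
+
+```java
+import org.apache.flink.api.common.functions.RichMapFunction;
+
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+public class CachingMapper extends RichMapFunction<String, String> {
+    // Since the same user code class loader (and thus the same class) is reused
+    // while a slot is held, this static cache is NOT re-initialized on recovery.
+    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();
+
+    @Override
+    public String map(String value) {
+        return CACHE.computeIfAbsent(value, String::toUpperCase);
+    }
+}
+```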
+
+### Memory Management
+#### Removal of deprecated mesos.resourcemanager.tasks.mem ([FLINK-15198](https://issues.apache.org/jira/browse/FLINK-15198))
+
+The `mesos.resourcemanager.tasks.mem` option, deprecated in 1.10 in favour of `taskmanager.memory.process.size`, has been completely removed and no longer has any effect in 1.11+.
+
+### Table API & SQL
+#### Changed packages of `TableEnvironment` ([FLINK-15947](https://issues.apache.org/jira/browse/FLINK-15947))
+Due to various issues with the packages `org.apache.flink.table.api.scala/java`, all classes from those packages were relocated.
+Moreover, the Scala expressions were moved to `org.apache.flink.table.api`, as announced in Flink 1.9.
+
+If you used one of:
+* `org.apache.flink.table.api.java.StreamTableEnvironment`
+* `org.apache.flink.table.api.scala.StreamTableEnvironment`
+* `org.apache.flink.table.api.java.BatchTableEnvironment`
+* `org.apache.flink.table.api.scala.BatchTableEnvironment` 
+
+and you do not convert to/from DataStream, switch to:
+* `org.apache.flink.table.api.TableEnvironment` 
+
+If you do convert to/from DataStream/DataSet, change your imports to one of:
+* `org.apache.flink.table.api.bridge.java.StreamTableEnvironment`
+* `org.apache.flink.table.api.bridge.scala.StreamTableEnvironment`
+* `org.apache.flink.table.api.bridge.java.BatchTableEnvironment`
+* `org.apache.flink.table.api.bridge.scala.BatchTableEnvironment` 
+
+For the Scala expressions use the import:
+* `org.apache.flink.table.api._` instead of `org.apache.flink.table.api.scala._`
+
+Additionally, if you use Scala's implicit conversions to/from DataStream/DataSet, import `org.apache.flink.table.api.bridge.scala._` instead of `org.apache.flink.table.api.scala._`.
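+
+A minimal Java sketch of the new imports (class and variable names here are hypothetical):
+
+```java
+// Pure Table API program, no DataStream conversion:
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+// Program that converts to/from DataStream:
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+
+public class ImportExample {
+    public static void main(String[] args) {
+        TableEnvironment pureTableEnv =
+                TableEnvironment.create(EnvironmentSettings.newInstance().build());
+
+        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+        StreamTableEnvironment bridgeTableEnv = StreamTableEnvironment.create(env);
+    }
+}
+```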
+
+#### Removal of deprecated `StreamTableSink#emitDataStream` ([FLINK-16362](https://issues.apache.org/jira/browse/FLINK-16362))
+Existing `StreamTableSink` implementations should remove the `emitDataStream` method.
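+
+A minimal sketch of a migrated sink (the class name is hypothetical and the printing sink merely stands in for a real one):
+
+```java
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.table.sinks.StreamTableSink;
+import org.apache.flink.types.Row;
+
+public abstract class MyStreamSink implements StreamTableSink<Row> {
+    // consumeDataStream replaces the removed emitDataStream and must
+    // return the DataStreamSink created from the stream.
+    @Override
+    public DataStreamSink<?> consumeDataStream(DataStream<Row> dataStream) {
+        return dataStream.print();
+    }
+}
+```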
+
+#### Removal of `BatchTableSink#emitDataSet` ([FLINK-16535](https://issues.apache.org/jira/browse/FLINK-16535))
+Existing `BatchTableSink` implementations should rename `emitDataSet` to `consumeDataSet` and return a `DataSink`.
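+
+Analogously, a hedged sketch of a migrated batch sink (names hypothetical; the printing output format stands in for a real one):
+
+```java
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.io.PrintingOutputFormat;
+import org.apache.flink.api.java.operators.DataSink;
+import org.apache.flink.table.sinks.BatchTableSink;
+import org.apache.flink.types.Row;
+
+public abstract class MyBatchSink implements BatchTableSink<Row> {
+    // consumeDataSet replaces emitDataSet and returns the created DataSink.
+    @Override
+    public DataSink<?> consumeDataSet(DataSet<Row> dataSet) {
+        return dataSet.output(new PrintingOutputFormat<>());
+    }
+}
+```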
+  
+#### Corrected execution behavior of TableEnvironment.execute() and StreamTableEnvironment.execute() ([FLINK-16363](https://issues.apache.org/jira/browse/FLINK-16363))
+
+In previous versions, `TableEnvironment.execute()` and `StreamExecutionEnvironment.execute()` could both trigger table and DataStream programs.
+Since Flink 1.11.0, table programs can only be triggered by `TableEnvironment.execute()`.
+Once a table program is converted into a DataStream program (through the `toAppendStream()` or `toRetractStream()` method), it can only be triggered by `StreamExecutionEnvironment.execute()`.
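+
+A sketch of the corrected contract, assuming `source` and `sink` tables have already been registered (names hypothetical):
+
+```java
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.types.Row;
+
+public class ExecuteExample {
+    public static void main(String[] args) throws Exception {
+        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
+
+        // Pure table program: only TableEnvironment.execute() triggers it.
+        tEnv.sqlUpdate("INSERT INTO sink SELECT * FROM source");
+        tEnv.execute("table job");
+
+        // Converted program: only StreamExecutionEnvironment.execute() triggers it.
+        Table table = tEnv.sqlQuery("SELECT * FROM source");
+        tEnv.toAppendStream(table, Row.class).print();
+        env.execute("datastream job");
+    }
+}
+```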
+
+#### Corrected execution behavior of ExecutionEnvironment.execute() and BatchTableEnvironment.execute() ([FLINK-17126](https://issues.apache.org/jira/browse/FLINK-17126))
+
+In previous versions, `BatchTableEnvironment.execute()` and `ExecutionEnvironment.execute()` could both trigger table and DataSet programs for the legacy batch planner.
+Since Flink 1.11.0, batch table programs can only be triggered by `BatchTableEnvironment.execute()`.
+Once a table program is converted into a DataSet program (through the `toDataSet()` method), it can only be triggered by `ExecutionEnvironment.execute()`.
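+
+The analogous sketch for the legacy batch planner (again assuming registered `source` and `sink` tables; names hypothetical):
+
+```java
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.io.PrintingOutputFormat;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.BatchTableEnvironment;
+import org.apache.flink.types.Row;
+
+public class BatchExecuteExample {
+    public static void main(String[] args) throws Exception {
+        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+        BatchTableEnvironment tEnv = BatchTableEnvironment.create(env);
+
+        // Pure table program: only BatchTableEnvironment.execute() triggers it.
+        tEnv.sqlUpdate("INSERT INTO sink SELECT * FROM source");
+        tEnv.execute("batch table job");
+
+        // Converted program: only ExecutionEnvironment.execute() triggers it.
+        Table table = tEnv.sqlQuery("SELECT * FROM source");
+        DataSet<Row> dataSet = tEnv.toDataSet(table, Row.class);
+        dataSet.output(new PrintingOutputFormat<>());
+        env.execute("dataset job");
+    }
+}
+```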
+
+### Configuration
+
+#### Renamed log4j-yarn-session.properties and logback-yarn.xml properties files ([FLINK-17527](https://issues.apache.org/jira/browse/FLINK-17527))
+The logging properties files `log4j-yarn-session.properties` and `logback-yarn.xml` have been renamed to `log4j-session.properties` and `logback-session.xml`.
+Moreover, `yarn-session.sh` and `kubernetes-session.sh` use these logging properties files.
+
+### State
+#### Removal of deprecated background cleanup toggle (State TTL) ([FLINK-15620](https://issues.apache.org/jira/browse/FLINK-15620))
+The `StateTtlConfig#cleanupInBackground` method has been removed, because it was deprecated and background TTL cleanup has been enabled by default since 1.10.
+
+#### Removal of deprecated option to disable TTL compaction filter ([FLINK-15621](https://issues.apache.org/jira/browse/FLINK-15621))
+The TTL compaction filter in RocksDB has been enabled by default since 1.10 and is now always enabled in 1.11+.
+Because of that, the following option and methods have been removed in 1.11 (a short sketch of the remaining configuration follows the list):
+- `state.backend.rocksdb.ttl.compaction.filter.enabled`
+- `StateTtlConfig#cleanupInRocksdbCompactFilter()`
+- `RocksDBStateBackend#isTtlCompactionFilterEnabled`
+- `RocksDBStateBackend#enableTtlCompactionFilter`
+- `RocksDBStateBackend#disableTtlCompactionFilter`
+- (state_backend.py) `is_ttl_compaction_filter_enabled`
+- (state_backend.py) `enable_ttl_compaction_filter`
+- (state_backend.py) `disable_ttl_compaction_filter`
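+
+A small sketch of a TTL configuration after this change; only the parameterized tuning method remains on the builder:
+
+```java
+import org.apache.flink.api.common.state.StateTtlConfig;
+import org.apache.flink.api.common.time.Time;
+
+public class TtlConfigExample {
+    // The RocksDB TTL compaction filter is always enabled now; there is no
+    // toggle anymore, only the optional tuning of the query count.
+    static final StateTtlConfig TTL_CONFIG = StateTtlConfig
+            .newBuilder(Time.days(7))
+            .cleanupInRocksdbCompactFilter(1000L)
+            .build();
+}
+```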
+
+#### Changed argument type of StateBackendFactory#createFromConfig ([FLINK-16913](https://issues.apache.org/jira/browse/FLINK-16913))
+Starting from Flink 1.11, the `StateBackendFactory#createFromConfig` interface takes a `ReadableConfig` instead of a `Configuration`.
+A `Configuration` instance is still a valid argument to that method, as it implements the `ReadableConfig` interface.
+Implementors of a custom `StateBackend` should adjust their implementations.
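+
+A hedged sketch of an adjusted factory (the factory and backend choice here are hypothetical):
+
+```java
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.runtime.state.StateBackendFactory;
+import org.apache.flink.runtime.state.memory.MemoryStateBackend;
+
+public class MyStateBackendFactory implements StateBackendFactory<MemoryStateBackend> {
+    // The method now receives a ReadableConfig; a Configuration still works
+    // at call sites, since Configuration implements ReadableConfig.
+    @Override
+    public MemoryStateBackend createFromConfig(ReadableConfig config, ClassLoader classLoader) {
+        return new MemoryStateBackend();
+    }
+}
+```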
+
+### PyFlink
+#### Throw exceptions for the unsupported data types ([FLINK-16606](https://issues.apache.org/jira/browse/FLINK-16606))
+DataTypes can be configured with some parameters, e.g., precision.
+However, the precision provided by users previously took no effect; a default value was always used instead.
+To avoid confusion, since Flink 1.11 an exception is thrown if the value is not supported, to make this more visible to users.
+Changes include:
+- the precision for `TimeType` can only be `0`
+- the length for `VarBinaryType`/`VarCharType` can only be `0x7fffffff`
+- the precision/scale for `DecimalType` can only be `38`/`18`
+- the precision for `TimestampType`/`LocalZonedTimestampType` can only be `3`
+- the resolution for `DayTimeIntervalType` can only be `SECOND` and the `fractionalPrecision` can only be `3`
+- the resolution for `YearMonthIntervalType` can only be `MONTH` and the `yearPrecision` can only be `2`
+- `CharType`/`BinaryType`/`ZonedTimestampType` are not supported
+
+### Monitoring
+
+#### Converted all MetricReporters to plugins ([FLINK-16963](https://issues.apache.org/jira/browse/FLINK-16963))
+All MetricReporters that come with Flink have been converted to plugins.
+They should no longer be placed into the `/lib` directory (doing so may result in dependency conflicts!), but into `/plugins/<some_directory>` instead.
+
+#### Changed Counter metric semantics of the DataDog metrics reporter ([FLINK-15438](https://issues.apache.org/jira/browse/FLINK-15438))
+The DataDog metrics reporter now reports counts as the number of events over the reporting interval, instead of the total count.
+This aligns the count semantics with the DataDog documentation.
+
+#### Switch to Log4j 2 by default ([FLINK-15672](https://issues.apache.org/jira/browse/FLINK-15672))
+Flink now uses Log4j 2 by default.
+Users who wish to revert to Log4j 1 can find instructions to do so in the logging documentation.
+
+#### Changed behaviour of JobManager API's log request ([FLINK-16303](https://issues.apache.org/jira/browse/FLINK-16303))
+Requesting an unavailable log or stdout file from the JobManager's HTTP server now returns status code 404.
+In previous releases, the HTTP server would return a file with `(file unavailable)` as its content.
+
+#### Removal of lastCheckpointAlignmentBuffered metric ([FLINK-16404](https://issues.apache.org/jira/browse/FLINK-16404))
+The `lastCheckpointAlignmentBuffered` metric has been removed, because the upstream task no longer sends any data after the barrier until alignment has finished on the downstream side.
+The corresponding entry in the web UI still exists but now always shows 0.
+It will be removed from the UI in a separate follow-up ticket.
+
+### Connectors
+#### Dropped Kafka 0.8/0.9 connectors ([FLINK-15115](https://issues.apache.org/jira/browse/FLINK-15115))
+The Kafka 0.8 and 0.9 connectors are no longer under active development.

Review comment:
       @zentol can you clarify this? This comes directly from https://issues.apache.org/jira/browse/FLINK-15115




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

