This is an automated email from the ASF dual-hosted git repository.

hequn pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit c1c91ecb7dcd23355b58585a5d154e1d05d52765
Author: hequn8128 <chenghe...@gmail.com>
AuthorDate: Mon Jun 1 20:09:06 2020 +0800

    Rebuild website
---
 content/blog/feed.xml                         | 883 ++++++++++++++++++--------
 content/news/2020/05/07/community-update.html |   2 +-
 2 files changed, 630 insertions(+), 255 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 042b7bd..53492a2 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,629 @@
<atom:link href="https://flink.apache.org/blog/feed.xml" rel="self" 
type="application/rss+xml" />
 
 <item>
+<title>Apache Flink 1.10.1 Released</title>
+<description>&lt;p&gt;The Apache Flink community released the first bugfix 
version of the Apache Flink 1.10 series.&lt;/p&gt;
+
+&lt;p&gt;This release includes 158 fixes and minor improvements for Flink 
1.10.0. Below you will find a detailed list of all fixes and 
improvements.&lt;/p&gt;
+
+&lt;p&gt;We highly recommend that all users upgrade to Flink 1.10.1.&lt;/p&gt;
+
+&lt;div class=&quot;alert alert-info&quot;&gt;
+  &lt;p&gt;&lt;span class=&quot;label label-info&quot; style=&quot;display: 
inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; 
aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Note&lt;/span&gt;
+FLINK-16684 changed the builders of the StreamingFileSink to make them 
compilable in Scala. This change is source compatible but binary incompatible. 
If using the StreamingFileSink, please recompile your user code against 1.10.1 
before upgrading.&lt;/p&gt;
+&lt;/div&gt;
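+
+&lt;p&gt;For illustration only, a minimal Scala sketch of the builder chain that 
+FLINK-16684 makes compilable in Scala; the output path below is 
+hypothetical:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.flink.api.common.serialization.SimpleStringEncoder
+import org.apache.flink.core.fs.Path
+import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
+
+// Hypothetical output path; recompile user code against 1.10.1 before upgrading.
+val sink: StreamingFileSink[String] = StreamingFileSink
+  .forRowFormat(new Path(&quot;/tmp/output&quot;), new SimpleStringEncoder[String](&quot;UTF-8&quot;))
+  .build()&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;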
+
+&lt;div class=&quot;alert alert-info&quot;&gt;
+  &lt;p&gt;&lt;span class=&quot;label label-info&quot; style=&quot;display: 
inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; 
aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Note&lt;/span&gt;
+FLINK-16683 Flink no longer supports starting clusters with .bat scripts. 
Users should instead use environments like WSL or Cygwin and work with the .sh 
scripts.&lt;/p&gt;
+&lt;/div&gt;
+
+&lt;p&gt;Updated Maven dependencies:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code 
class=&quot;language-xml&quot;&gt;&lt;span 
class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span 
class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span
 class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span 
class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-java&lt;span
 class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span 
class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.10.1&lt;span 
class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span 
class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span
 class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span 
class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-streaming-java_2.11&lt;span
 class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span 
class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.10.1&lt;span 
class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span 
class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span
 class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span 
class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-clients_2.11&lt;span
 class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span 
class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.10.1&lt;span 
class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span 
class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
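+
+&lt;p&gt;For sbt users, a minimal equivalent sketch of the same coordinates; this 
+assumes a Scala 2.11 build, so that &lt;code&gt;%%&lt;/code&gt; appends the 
+&lt;code&gt;_2.11&lt;/code&gt; suffix:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// build.sbt (sketch): the same artifacts as the Maven snippet above
+scalaVersion := &quot;2.11.12&quot;
+
+libraryDependencies ++= Seq(
+  &quot;org.apache.flink&quot; %  &quot;flink-java&quot;           % &quot;1.10.1&quot;,
+  &quot;org.apache.flink&quot; %% &quot;flink-streaming-java&quot; % &quot;1.10.1&quot;,
+  &quot;org.apache.flink&quot; %% &quot;flink-clients&quot;        % &quot;1.10.1&quot;
+)&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;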
+
+&lt;p&gt;You can find the binaries on the updated &lt;a 
href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;List of resolved issues:&lt;/p&gt;
+
+&lt;h2&gt;        Sub-task
+&lt;/h2&gt;
+&lt;ul&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-14126&quot;&gt;FLINK-14126&lt;/a&gt;]
 -         Elasticsearch Xpack Machine Learning doesn&amp;#39;t support ARM
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15143&quot;&gt;FLINK-15143&lt;/a&gt;]
 -         Create document for FLIP-49 TM memory model and configuration guide
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15561&quot;&gt;FLINK-15561&lt;/a&gt;]
 -         Unify Kerberos credentials checking
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15790&quot;&gt;FLINK-15790&lt;/a&gt;]
 -         Make FlinkKubeClient and its implementations asynchronous
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15817&quot;&gt;FLINK-15817&lt;/a&gt;]
 -         Kubernetes resource leak when a deployment exception happens
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16049&quot;&gt;FLINK-16049&lt;/a&gt;]
 -         Remove outdated &amp;quot;Best Practices&amp;quot; section from 
Application Development Section
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16131&quot;&gt;FLINK-16131&lt;/a&gt;]
 -         Translate &amp;quot;Amazon S3&amp;quot; page of &amp;quot;File 
Systems&amp;quot; into Chinese
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16389&quot;&gt;FLINK-16389&lt;/a&gt;]
 -         Bump Kafka 0.10 to 0.10.2.2
+&lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;h2&gt;        Bug
+&lt;/h2&gt;
+&lt;ul&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-2336&quot;&gt;FLINK-2336&lt;/a&gt;]
 -         ArrayIndexOutOfBoundsException in TypeExtractor when mapping
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-10918&quot;&gt;FLINK-10918&lt;/a&gt;]
 -         Incremental keyed state with RocksDB throws &amp;quot;cannot create 
directory&amp;quot; error on Windows
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-11193&quot;&gt;FLINK-11193&lt;/a&gt;]
 -         Rocksdb timer service factory configuration option is not settable 
per job
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-13483&quot;&gt;FLINK-13483&lt;/a&gt;]
 -         PrestoS3FileSystemITCase.testDirectoryListing fails on Travis
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-14038&quot;&gt;FLINK-14038&lt;/a&gt;]
 -         ExecutionGraph deploy failed due to akka timeout
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-14311&quot;&gt;FLINK-14311&lt;/a&gt;]
 -         Streaming File Sink end-to-end test failed on Travis
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-14316&quot;&gt;FLINK-14316&lt;/a&gt;]
 -         Stuck in &amp;quot;Job leader ... lost leadership&amp;quot; error
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15417&quot;&gt;FLINK-15417&lt;/a&gt;]
 -         Remove the docker volume or mount when starting Mesos e2e cluster
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15669&quot;&gt;FLINK-15669&lt;/a&gt;]
 -         SQL client can&amp;#39;t cancel flink job
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15772&quot;&gt;FLINK-15772&lt;/a&gt;]
 -         Shaded Hadoop S3A with credentials provider end-to-end test fails on 
travis
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15811&quot;&gt;FLINK-15811&lt;/a&gt;]
 -         StreamSourceOperatorWatermarksTest.testNoMaxWatermarkOnAsyncCancel 
fails on Travis
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15812&quot;&gt;FLINK-15812&lt;/a&gt;]
 -         HistoryServer archiving is done in Dispatcher main thread
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15838&quot;&gt;FLINK-15838&lt;/a&gt;]
 -         Dangling CountDownLatch.await(timeout)
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15852&quot;&gt;FLINK-15852&lt;/a&gt;]
 -         Job is submitted to the wrong session cluster
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15904&quot;&gt;FLINK-15904&lt;/a&gt;]
 -         Make Kafka Consumer work with activated 
&amp;quot;disableGenericTypes()&amp;quot;
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15936&quot;&gt;FLINK-15936&lt;/a&gt;]
 -         TaskExecutorTest#testSlotAcceptance deadlocks
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15953&quot;&gt;FLINK-15953&lt;/a&gt;]
 -         Job Status is hard to read for some Statuses
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16013&quot;&gt;FLINK-16013&lt;/a&gt;]
 -         List and map config options could not be parsed correctly
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16014&quot;&gt;FLINK-16014&lt;/a&gt;]
 -         S3 plugin ClassNotFoundException SAXParser
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16025&quot;&gt;FLINK-16025&lt;/a&gt;]
 -         Service could expose blob server port mismatched with JM Container
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16026&quot;&gt;FLINK-16026&lt;/a&gt;]
 -         Travis failed due to python setup
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16047&quot;&gt;FLINK-16047&lt;/a&gt;]
 -         Blink planner produces wrong aggregate results with state clean up
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16067&quot;&gt;FLINK-16067&lt;/a&gt;]
 -         Flink&amp;#39;s CalciteParser swallows error position information
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16068&quot;&gt;FLINK-16068&lt;/a&gt;]
 -         table with keyword-escaped columns and computed_column_expression 
columns
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16070&quot;&gt;FLINK-16070&lt;/a&gt;]
 -         Blink planner can not extract correct unique key for 
UpsertStreamTableSink
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16108&quot;&gt;FLINK-16108&lt;/a&gt;]
 -         StreamSQLExample fails if running with the Blink planner
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16111&quot;&gt;FLINK-16111&lt;/a&gt;]
 -         Kubernetes deployment does not respect 
&amp;quot;taskmanager.cpu.cores&amp;quot;.
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16113&quot;&gt;FLINK-16113&lt;/a&gt;]
 -         ExpressionReducer shouldn&amp;#39;t escape the reduced string value
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16115&quot;&gt;FLINK-16115&lt;/a&gt;]
 -         Aliyun oss filesystem could not work with plugin mechanism
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16139&quot;&gt;FLINK-16139&lt;/a&gt;]
 -         Co-location constraints are not reset on task recovery in 
DefaultScheduler
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16161&quot;&gt;FLINK-16161&lt;/a&gt;]
 -         Statistics zero should be unknown in HiveCatalog
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16170&quot;&gt;FLINK-16170&lt;/a&gt;]
 -         SearchTemplateRequest ClassNotFoundException when use 
flink-sql-connector-elasticsearch7
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16220&quot;&gt;FLINK-16220&lt;/a&gt;]
 -         JsonRowSerializationSchema throws cast exception : NullNode cannot 
be cast to ArrayNode
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16231&quot;&gt;FLINK-16231&lt;/a&gt;]
 -         Hive connector is missing jdk.tools exclusion against Hive 2.x.x
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16234&quot;&gt;FLINK-16234&lt;/a&gt;]
 -         Fix unstable cases in StreamingJobGraphGeneratorTest
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16241&quot;&gt;FLINK-16241&lt;/a&gt;]
 -         Remove the license and notice file in flink-ml-lib module on 
release-1.10 branch
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16242&quot;&gt;FLINK-16242&lt;/a&gt;]
 -         BinaryGeneric serialization error cause checkpoint failure
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16262&quot;&gt;FLINK-16262&lt;/a&gt;]
 -         Class loader problem with FlinkKafkaProducer.Semantic.EXACTLY_ONCE 
and usrlib directory
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16269&quot;&gt;FLINK-16269&lt;/a&gt;]
 -         Generic type cannot be matched when converting a table to a stream
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16281&quot;&gt;FLINK-16281&lt;/a&gt;]
 -         parameter &amp;#39;maxRetryTimes&amp;#39; can not work in 
JDBCUpsertTableSink
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16301&quot;&gt;FLINK-16301&lt;/a&gt;]
 -         Annoying &amp;quot;Cannot find FunctionDefinition&amp;quot; messages 
with SQL for f_proctime or =
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16308&quot;&gt;FLINK-16308&lt;/a&gt;]
 -         SQL connector download links are broken
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16313&quot;&gt;FLINK-16313&lt;/a&gt;]
 -         flink-state-processor-api: surefire execution unstable on Azure
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16331&quot;&gt;FLINK-16331&lt;/a&gt;]
 -         Remove source licenses for old WebUI
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16345&quot;&gt;FLINK-16345&lt;/a&gt;]
 -         Computed column can not refer time attribute column
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16360&quot;&gt;FLINK-16360&lt;/a&gt;]
 -         Connector on Hive 2.0.1 doesn&amp;#39;t support type conversion from 
STRING to VARCHAR
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16371&quot;&gt;FLINK-16371&lt;/a&gt;]
 -         HadoopCompressionBulkWriter fails with 
&amp;#39;java.io.NotSerializableException&amp;#39;
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16373&quot;&gt;FLINK-16373&lt;/a&gt;]
 -         EmbeddedLeaderService: IllegalStateException: The RPC connection is 
already closed
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16413&quot;&gt;FLINK-16413&lt;/a&gt;]
 -         Reduce hive source parallelism when limit push down
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16414&quot;&gt;FLINK-16414&lt;/a&gt;]
 -         Creating a udaf/udtf function using SQL causes ValidationException: SQL 
validation failed. null
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16433&quot;&gt;FLINK-16433&lt;/a&gt;]
 -         TableEnvironmentImpl doesn&amp;#39;t clear buffered operations when 
it fails to translate the operation
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16435&quot;&gt;FLINK-16435&lt;/a&gt;]
 -         Replace since decorator with versionadd to mark the version an API 
was introduced in
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16467&quot;&gt;FLINK-16467&lt;/a&gt;]
 -         MemorySizeTest#testToHumanReadableString() is not portable
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16526&quot;&gt;FLINK-16526&lt;/a&gt;]
 -         Fix exception when computed column expression references a keyword 
column name
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16541&quot;&gt;FLINK-16541&lt;/a&gt;]
 -         Document of table.exec.shuffle-mode is incorrect
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16550&quot;&gt;FLINK-16550&lt;/a&gt;]
 -         HadoopS3* tests fail with NullPointerException exceptions
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16560&quot;&gt;FLINK-16560&lt;/a&gt;]
 -         Forward Configuration in PackagedProgramUtils#getPipelineFromProgram
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16567&quot;&gt;FLINK-16567&lt;/a&gt;]
 -         Get the API error of the StreamQueryConfig on Page &amp;quot;Query 
Configuration&amp;quot;
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16573&quot;&gt;FLINK-16573&lt;/a&gt;]
 -         Kinesis consumer does not properly shutdown RecordFetcher threads
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16576&quot;&gt;FLINK-16576&lt;/a&gt;]
 -         State inconsistency on restore with memory state backends
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16626&quot;&gt;FLINK-16626&lt;/a&gt;]
 -         Prevent REST handler from being closed more than once
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16632&quot;&gt;FLINK-16632&lt;/a&gt;]
 -         SqlDateTimeUtils#toSqlTimestamp(String, String) may yield incorrect 
result
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16635&quot;&gt;FLINK-16635&lt;/a&gt;]
 -         Incompatible okio dependency in flink-metrics-influxdb module
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16646&quot;&gt;FLINK-16646&lt;/a&gt;]
 -         Flink reading an ORC file throws a NullPointerException
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16647&quot;&gt;FLINK-16647&lt;/a&gt;]
 -         Missing file extension when inserting into Hive table with compression
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16652&quot;&gt;FLINK-16652&lt;/a&gt;]
 -         BytesColumnVector should init buffer in Hive 3.x
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16662&quot;&gt;FLINK-16662&lt;/a&gt;]
 -         Blink Planner failed to generate JobGraph for POJO DataStream 
converting to Table (Cannot determine simple type name)
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16664&quot;&gt;FLINK-16664&lt;/a&gt;]
 -         Unable to set DataStreamSource parallelism to default (-1)
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16675&quot;&gt;FLINK-16675&lt;/a&gt;]
 -         TableEnvironmentITCase. testClearOperation fails on travis nightly 
build
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16684&quot;&gt;FLINK-16684&lt;/a&gt;]
 -         StreamingFileSink builder does not work with Scala
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16696&quot;&gt;FLINK-16696&lt;/a&gt;]
 -         Savepoint trigger documentation is insufficient
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16703&quot;&gt;FLINK-16703&lt;/a&gt;]
 -         AkkaRpcActor state machine does not record transition to terminating 
state.
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16705&quot;&gt;FLINK-16705&lt;/a&gt;]
 -         LocalExecutor tears down MiniCluster before client can retrieve 
JobResult
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16718&quot;&gt;FLINK-16718&lt;/a&gt;]
 -         KvStateServerHandlerTest leaks Netty ByteBufs
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16727&quot;&gt;FLINK-16727&lt;/a&gt;]
 -         Fix cast exception when having time point literal as parameters
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16732&quot;&gt;FLINK-16732&lt;/a&gt;]
 -         Failed to call Hive UDF with constant return value
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16740&quot;&gt;FLINK-16740&lt;/a&gt;]
 -         OrcSplitReaderUtil::logicalTypeToOrcType fails to create decimal 
type with precision &amp;lt; 10
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16759&quot;&gt;FLINK-16759&lt;/a&gt;]
 -         HiveModuleTest failed to compile on release-1.10
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16767&quot;&gt;FLINK-16767&lt;/a&gt;]
 -         Failed to read Hive table with RegexSerDe
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16771&quot;&gt;FLINK-16771&lt;/a&gt;]
 -         NPE when filtering by decimal column
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16821&quot;&gt;FLINK-16821&lt;/a&gt;]
 -         Run Kubernetes test failed with invalid name 
&amp;quot;minikube&amp;quot;
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16822&quot;&gt;FLINK-16822&lt;/a&gt;]
 -         The config set by SET command does not work
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16825&quot;&gt;FLINK-16825&lt;/a&gt;]
 -         PrometheusReporterEndToEndITCase should rely on path returned by 
DownloadCache
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16836&quot;&gt;FLINK-16836&lt;/a&gt;]
 -         Losing leadership does not clear rpc connection in 
JobManagerLeaderListener
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16860&quot;&gt;FLINK-16860&lt;/a&gt;]
 -         Failed to push filter into OrcTableSource when upgrading to 1.9.2
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16888&quot;&gt;FLINK-16888&lt;/a&gt;]
 -         Re-add jquery license file under &amp;quot;/licenses&amp;quot;
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16901&quot;&gt;FLINK-16901&lt;/a&gt;]
 -         Flink Kinesis connector NOTICE should have contents of AWS 
KPL&amp;#39;s THIRD_PARTY_NOTICES file manually merged in
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16913&quot;&gt;FLINK-16913&lt;/a&gt;]
 -         ReadableConfigToConfigurationAdapter#getEnum throws 
UnsupportedOperationException
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16916&quot;&gt;FLINK-16916&lt;/a&gt;]
 -         The logic of NullableSerializer#copy is wrong
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16944&quot;&gt;FLINK-16944&lt;/a&gt;]
 -         Compile error in DumpCompiledPlanTest and PreviewPlanDumpTest
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16980&quot;&gt;FLINK-16980&lt;/a&gt;]
 -         Python UDF doesn&amp;#39;t work with protobuf 3.6.1
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16981&quot;&gt;FLINK-16981&lt;/a&gt;]
 -         flink-runtime tests are crashing the JVM on Java11 because of 
PowerMock
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17062&quot;&gt;FLINK-17062&lt;/a&gt;]
 -         Fix the conversion from Java row type to Python row type
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17066&quot;&gt;FLINK-17066&lt;/a&gt;]
 -         Update pyarrow version bounds less than 0.14.0
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17093&quot;&gt;FLINK-17093&lt;/a&gt;]
 -         Python UDF doesn&amp;#39;t work when the input column is from 
composite field
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17107&quot;&gt;FLINK-17107&lt;/a&gt;]
 -         CheckpointCoordinatorConfiguration#isExactlyOnce() is inconsistent 
with StreamConfig#getCheckpointMode()
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17114&quot;&gt;FLINK-17114&lt;/a&gt;]
 -         When the pyflink job runs in local mode and the command 
&amp;quot;python&amp;quot; points to Python 2.7, the startup of the Python UDF 
worker will fail.
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17124&quot;&gt;FLINK-17124&lt;/a&gt;]
 -         The PyFlink Job runs into infinite loop if the Python UDF imports 
job code
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17152&quot;&gt;FLINK-17152&lt;/a&gt;]
 -         FunctionDefinitionUtil generates wrong resultType and acc type of 
AggregateFunctionDefinition
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17308&quot;&gt;FLINK-17308&lt;/a&gt;]
 -         ExecutionGraphCache cachedExecutionGraphs not cleaned up, causing OOM
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17313&quot;&gt;FLINK-17313&lt;/a&gt;]
 -         Validation error when insert decimal/varchar with precision into 
sink using TypeInformation of row
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17334&quot;&gt;FLINK-17334&lt;/a&gt;]
 -         Flink does not support Hive UDFs with primitive return types
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17338&quot;&gt;FLINK-17338&lt;/a&gt;]
 -         LocalExecutorITCase.testBatchQueryCancel test timeout
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17359&quot;&gt;FLINK-17359&lt;/a&gt;]
 -         Entropy key is not resolved if flink-s3-fs-hadoop is added as a 
plugin
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17403&quot;&gt;FLINK-17403&lt;/a&gt;]
 -         Fix invalid classpath in BashJavaUtilsITCase
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17471&quot;&gt;FLINK-17471&lt;/a&gt;]
 -         Move LICENSE and NOTICE files to root directory of python 
distribution
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17483&quot;&gt;FLINK-17483&lt;/a&gt;]
 -         Update flink-sql-connector-elasticsearch7 NOTICE file to correctly 
reflect bundled dependencies
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17496&quot;&gt;FLINK-17496&lt;/a&gt;]
 -         Performance regression with amazon-kinesis-producer 0.13.1 in Flink 
1.10.x
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17499&quot;&gt;FLINK-17499&lt;/a&gt;]
 -         LazyTimerService used to register timers via State Processing API 
incorrectly mixes event time timers with processing time timers
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17514&quot;&gt;FLINK-17514&lt;/a&gt;]
 -         TaskCancelerWatchdog does not kill TaskManager
+&lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;h2&gt;        New Feature
+&lt;/h2&gt;
+&lt;ul&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17275&quot;&gt;FLINK-17275&lt;/a&gt;]
 -         Add core training exercises
+&lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;h2&gt;        Improvement
+&lt;/h2&gt;
+&lt;ul&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-9656&quot;&gt;FLINK-9656&lt;/a&gt;]
 -         Environment java opts for flink run
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15094&quot;&gt;FLINK-15094&lt;/a&gt;]
 -         Warning about using private constructor of java.nio.DirectByteBuffer 
in Java 11
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15584&quot;&gt;FLINK-15584&lt;/a&gt;]
 -         Give nested data type of ROWs in ValidationException
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15616&quot;&gt;FLINK-15616&lt;/a&gt;]
 -         Move boot error messages from python-udf-boot.log to 
taskmanager&amp;#39;s log file
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15989&quot;&gt;FLINK-15989&lt;/a&gt;]
 -         Rewrap OutOfMemoryError in allocateUnpooledOffHeap with better 
message
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16018&quot;&gt;FLINK-16018&lt;/a&gt;]
 -         Improve error reporting when submitting batch job (instead of 
AskTimeoutException)
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16125&quot;&gt;FLINK-16125&lt;/a&gt;]
 -         Make zookeeper.connect optional for Kafka connectors
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16167&quot;&gt;FLINK-16167&lt;/a&gt;]
 -         Update documentation about python shell execution
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16191&quot;&gt;FLINK-16191&lt;/a&gt;]
 -         Improve error message on Windows when RocksDB Paths are too long
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16280&quot;&gt;FLINK-16280&lt;/a&gt;]
 -         Fix sample code errors in the documentation about elasticsearch 
connector
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16288&quot;&gt;FLINK-16288&lt;/a&gt;]
 -         Setting the TTL for discarding task pods on Kubernetes.
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16293&quot;&gt;FLINK-16293&lt;/a&gt;]
 -         Document using plugins in Kubernetes
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16343&quot;&gt;FLINK-16343&lt;/a&gt;]
 -         Improve exception message when reading an unbounded source in batch 
mode
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16406&quot;&gt;FLINK-16406&lt;/a&gt;]
 -         Increase default value for JVM Metaspace to minimise its 
OutOfMemoryError
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16538&quot;&gt;FLINK-16538&lt;/a&gt;]
 -         Restructure Python Table API documentation
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16604&quot;&gt;FLINK-16604&lt;/a&gt;]
 -         Column key in JM configuration is too narrow
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16683&quot;&gt;FLINK-16683&lt;/a&gt;]
 -         Remove scripts for starting Flink on Windows
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16697&quot;&gt;FLINK-16697&lt;/a&gt;]
 -         Disable JMX rebinding
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16763&quot;&gt;FLINK-16763&lt;/a&gt;]
 -         Should not use BatchTableEnvironment for Python UDF in the document 
of flink-1.10
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16772&quot;&gt;FLINK-16772&lt;/a&gt;]
 -         Bump derby to 10.12.1.1+ or exclude it
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16790&quot;&gt;FLINK-16790&lt;/a&gt;]
 -         Enable the interpretation of backslash escapes
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16862&quot;&gt;FLINK-16862&lt;/a&gt;]
 -         Remove example url in quickstarts
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16874&quot;&gt;FLINK-16874&lt;/a&gt;]
 -         Respect the dynamic options when calculating memory options in 
taskmanager.sh
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16942&quot;&gt;FLINK-16942&lt;/a&gt;]
 -         ES 5 sink should allow users to select netty transport client
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17065&quot;&gt;FLINK-17065&lt;/a&gt;]
 -         Add documentation about the Python versions supported for PyFlink
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17125&quot;&gt;FLINK-17125&lt;/a&gt;]
 -         Add a Usage Notes Page to Answer Common Questions Encountered by 
PyFlink Users
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17254&quot;&gt;FLINK-17254&lt;/a&gt;]
 -         Improve the PyFlink documentation and examples to use SQL DDL for 
source/sink definition
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17276&quot;&gt;FLINK-17276&lt;/a&gt;]
 -         Add checkstyle to training exercises
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17277&quot;&gt;FLINK-17277&lt;/a&gt;]
 -         Apply IntelliJ recommendations to training exercises
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17278&quot;&gt;FLINK-17278&lt;/a&gt;]
 -         Add Travis to the training exercises
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17279&quot;&gt;FLINK-17279&lt;/a&gt;]
 -         Use gradle build scans for training exercises
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-17316&quot;&gt;FLINK-17316&lt;/a&gt;]
 -         Have HourlyTips solutions use TumblingEventTimeWindows.of
+&lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;h2&gt;        Task
+&lt;/h2&gt;
+&lt;ul&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15741&quot;&gt;FLINK-15741&lt;/a&gt;]
 -         Fix TTL docs after enabling RocksDB compaction filter by default 
(needs Chinese translation)
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15933&quot;&gt;FLINK-15933&lt;/a&gt;]
 -         Update content on how generic table schema is stored in Hive via 
HiveCatalog
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-15991&quot;&gt;FLINK-15991&lt;/a&gt;]
 -         Create Chinese documentation for FLIP-49 TM memory model
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16004&quot;&gt;FLINK-16004&lt;/a&gt;]
 -         Exclude flink-rocksdb-state-memory-control-test jars from the dist
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16454&quot;&gt;FLINK-16454&lt;/a&gt;]
 -         Update the copyright year in NOTICE files
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16530&quot;&gt;FLINK-16530&lt;/a&gt;]
 -         Add documentation about &amp;quot;GROUPING SETS&amp;quot; and 
&amp;quot;CUBE&amp;quot; support in streaming mode
+&lt;/li&gt;
+&lt;li&gt;[&lt;a 
href=&quot;https://issues.apache.org/jira/browse/FLINK-16592&quot;&gt;FLINK-16592&lt;/a&gt;]
 -         The Streaming File Sink documentation has a grammar mistake
+&lt;/li&gt;
+&lt;/ul&gt;
+
+</description>
+<pubDate>Tue, 12 May 2020 14:00:00 +0200</pubDate>
+<link>https://flink.apache.org/news/2020/05/12/release-1.10.1.html</link>
+<guid isPermaLink="true">/news/2020/05/12/release-1.10.1.html</guid>
+</item>
+
+<item>
+<title>Flink Community Update - May&#39;20</title>
+<description>&lt;p&gt;Can you smell it? It’s release month! It took a while, 
but now that we’re &lt;a 
href=&quot;https://flink.apache.org/news/2020/04/01/community-update.html&quot;&gt;all
 caught up with the past&lt;/a&gt;, the Community Update is here to stay. This 
time around, we’re warming up for Flink 1.11 and peeping back to the month of 
April in the Flink community — with the release of Stateful Functions 2.0, a 
new self-paced Flink training and some efforts to improve the Flink do [...]
+
+&lt;p&gt;Last month also marked the debut of Flink Forward Virtual Conference 
2020: what did you think? If you missed it altogether or just want to recap 
some of the sessions, the &lt;a 
href=&quot;https://www.youtube.com/playlist?list=PLDX4T_cnKjD0ngnBSU-bYGfgVv17MiwA7&quot;&gt;videos&lt;/a&gt;
 and &lt;a 
href=&quot;https://www.slideshare.net/FlinkForward&quot;&gt;slides&lt;/a&gt; 
are now available!&lt;/p&gt;
+
+&lt;div class=&quot;page-toc&quot;&gt;
+&lt;ul id=&quot;markdown-toc&quot;&gt;
+  &lt;li&gt;&lt;a href=&quot;#the-past-month-in-flink&quot; 
id=&quot;markdown-toc-the-past-month-in-flink&quot;&gt;The Past Month in 
Flink&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#flink-stateful-functions-20-is-out&quot; 
id=&quot;markdown-toc-flink-stateful-functions-20-is-out&quot;&gt;Flink 
Stateful Functions 2.0 is out!&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#warming-up-for-flink-111&quot; 
id=&quot;markdown-toc-warming-up-for-flink-111&quot;&gt;Warming up for Flink 
1.11&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#flink-minor-releases&quot; 
id=&quot;markdown-toc-flink-minor-releases&quot;&gt;Flink Minor 
Releases&lt;/a&gt;        &lt;ul&gt;
+          &lt;li&gt;&lt;a href=&quot;#flink-193&quot; 
id=&quot;markdown-toc-flink-193&quot;&gt;Flink 1.9.3&lt;/a&gt;&lt;/li&gt;
+          &lt;li&gt;&lt;a href=&quot;#flink-1101&quot; 
id=&quot;markdown-toc-flink-1101&quot;&gt;Flink 1.10.1&lt;/a&gt;&lt;/li&gt;
+        &lt;/ul&gt;
+      &lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#new-committers-and-pmc-members&quot; 
id=&quot;markdown-toc-new-committers-and-pmc-members&quot;&gt;New Committers 
and PMC Members&lt;/a&gt;        &lt;ul&gt;
+          &lt;li&gt;&lt;a href=&quot;#new-pmc-members&quot; 
id=&quot;markdown-toc-new-pmc-members&quot;&gt;New PMC 
Members&lt;/a&gt;&lt;/li&gt;
+          &lt;li&gt;&lt;a href=&quot;#new-committers&quot; 
id=&quot;markdown-toc-new-committers&quot;&gt;New 
Committers&lt;/a&gt;&lt;/li&gt;
+        &lt;/ul&gt;
+      &lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#the-bigger-picture&quot; 
id=&quot;markdown-toc-the-bigger-picture&quot;&gt;The Bigger Picture&lt;/a&gt;  
  &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#a-new-self-paced-apache-flink-training&quot; 
id=&quot;markdown-toc-a-new-self-paced-apache-flink-training&quot;&gt;A new 
self-paced Apache Flink training&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#google-season-of-docs-2020&quot; 
id=&quot;markdown-toc-google-season-of-docs-2020&quot;&gt;Google Season of Docs 
2020&lt;/a&gt;&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#and-something-to-read&quot; 
id=&quot;markdown-toc-and-something-to-read&quot;&gt;…and something to 
read!&lt;/a&gt;&lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;/div&gt;
+
+&lt;h1 id=&quot;the-past-month-in-flink&quot;&gt;The Past Month in 
Flink&lt;/h1&gt;
+
+&lt;h2 id=&quot;flink-stateful-functions-20-is-out&quot;&gt;Flink Stateful 
Functions 2.0 is out!&lt;/h2&gt;
+
+&lt;p&gt;In the beginning of April, the Flink community announced the &lt;a 
href=&quot;https://flink.apache.org/news/2020/04/07/release-statefun-2.0.0.html&quot;&gt;release
 of Stateful Functions 2.0&lt;/a&gt; — the first as part of the Apache Flink 
project. From this release, you can use Flink as the base of a (stateful) 
serverless platform with out-of-the-box consistent and scalable state, and 
efficient messaging between functions. You can even run your stateful functions 
on platforms l [...]
+
+&lt;div style=&quot;line-height:60%;&quot;&gt;
+    &lt;br /&gt;
+&lt;/div&gt;
+
+&lt;center&gt;
+&lt;img 
src=&quot;/img/blog/2020-05-06-community-update/2020-05-06-community-update_2.png&quot;
 width=&quot;550px&quot; alt=&quot;Stateful Functions&quot; /&gt;
+&lt;/center&gt;
+
+&lt;div style=&quot;line-height:60%;&quot;&gt;
+    &lt;br /&gt;
+&lt;/div&gt;
+
+&lt;p&gt;It’s been encouraging to see so many questions about Stateful 
Functions popping up in the &lt;a 
href=&quot;https://lists.apache.org/list.html?u...@flink.apache.org:lte=3M:statefun&quot;&gt;mailing
 list&lt;/a&gt; and Stack Overflow! If you’d like to get involved, we’re always 
&lt;a 
href=&quot;https://github.com/apache/flink-statefun#contributing&quot;&gt;looking
 for new contributors&lt;/a&gt; — especially around SDKs for other languages 
like Go, JavaScript and Rust.&lt;/p&gt;
+
+&lt;hr /&gt;
+
+&lt;h2 id=&quot;warming-up-for-flink-111&quot;&gt;Warming up for Flink 
1.11&lt;/h2&gt;
+
+&lt;p&gt;The final preparations for the release of Flink 1.11 are well 
underway, with the feature freeze scheduled for May 15th, and there are a lot of 
new features and improvements to look out for:&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;On the &lt;strong&gt;usability&lt;/strong&gt; side, you can 
expect a big focus on smoothing data ingestion with contributions like support 
for Change Data Capture (CDC) in the Table API/SQL (&lt;a 
href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-105%3A+Support+to+Interpret+and+Emit+Changelog+in+Flink+SQL&quot;&gt;FLIP-105&lt;/a&gt;),
 easy streaming data ingestion into Apache Hive (&lt;a 
href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-115%3A [...]
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;On the &lt;strong&gt;operational&lt;/strong&gt; side, the much 
anticipated new Source API (&lt;a 
href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface&quot;&gt;FLIP-27&lt;/a&gt;)
 will unify batch and streaming sources, and improve out-of-the-box event-time 
behavior; while unaligned checkpoints (&lt;a 
href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-76%3A+Unaligned+Checkpoints&quot;&gt;FLIP-76&lt;/a&gt;)
 and changes [...]
+  &lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;p&gt;Throw into the mix improvements around type systems, the WebUI, 
metrics reporting, supported formats and…we can’t wait! To get an overview of 
the ongoing developments, have a look at &lt;a 
href=&quot;http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Development-progress-of-Apache-Flink-1-11-tp40718.html&quot;&gt;this
 thread&lt;/a&gt;. We encourage the community to get involved in testing once 
an RC (Release Candidate) is out. Keep an eye on the &lt;a href=& [...]
+
+&lt;hr /&gt;
+
+&lt;h2 id=&quot;flink-minor-releases&quot;&gt;Flink Minor Releases&lt;/h2&gt;
+
+&lt;h3 id=&quot;flink-193&quot;&gt;Flink 1.9.3&lt;/h3&gt;
+
+&lt;p&gt;The community released Flink 1.9.3, covering some outstanding bugs 
from Flink 1.9! You can find more in the &lt;a 
href=&quot;https://flink.apache.org/news/2020/04/24/release-1.9.3.html&quot;&gt;announcement
 blogpost&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h3 id=&quot;flink-1101&quot;&gt;Flink 1.10.1&lt;/h3&gt;
+
+&lt;p&gt;Also in the pipeline is the release of Flink 1.10.1, already in the 
&lt;a 
href=&quot;http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Release-1-10-1-release-candidate-2-td41019.html&quot;&gt;RC
 voting&lt;/a&gt; phase. So, you can expect Flink 1.10.1 to be released 
soon!&lt;/p&gt;
+
+&lt;hr /&gt;
+
+&lt;h2 id=&quot;new-committers-and-pmc-members&quot;&gt;New Committers and PMC 
Members&lt;/h2&gt;
+
+&lt;p&gt;The Apache Flink community has welcomed &lt;strong&gt;3 PMC 
Members&lt;/strong&gt; and &lt;strong&gt;2 new Committers&lt;/strong&gt; since 
the last update. Congratulations!&lt;/p&gt;
+
+&lt;h3 id=&quot;new-pmc-members&quot;&gt;New PMC Members&lt;/h3&gt;
+
+&lt;div class=&quot;row&quot;&gt;
+  &lt;div class=&quot;col-lg-3&quot;&gt;
+    &lt;div class=&quot;text-center&quot;&gt;
+      &lt;img class=&quot;img-circle&quot; 
src=&quot;https://avatars2.githubusercontent.com/u/6242259?s=400&amp;amp;u=6e39f4fdbabc8ce4ccde9125166f791957d3ae80&amp;amp;v=4&quot;
 width=&quot;90&quot; height=&quot;90&quot; /&gt;
+      &lt;p&gt;&lt;a href=&quot;https://twitter.com/dwysakowicz&quot;&gt;Dawid 
Wysakowicz&lt;/a&gt;&lt;/p&gt;
+    &lt;/div&gt;
+  &lt;/div&gt;
+  &lt;div class=&quot;col-lg-3&quot;&gt;
+    &lt;div class=&quot;text-center&quot;&gt;
+      &lt;img class=&quot;img-circle&quot; 
src=&quot;https://avatars1.githubusercontent.com/u/4971479?s=400&amp;amp;u=49d4f217e26186606ab13a17a23a038b62b86682&amp;amp;v=4&quot;
 width=&quot;90&quot; height=&quot;90&quot; /&gt;
+      &lt;p&gt;&lt;a href=&quot;https://twitter.com/HequnC&quot;&gt;Hequn 
Cheng&lt;/a&gt;&lt;/p&gt;
+    &lt;/div&gt;
+  &lt;/div&gt;
+  &lt;div class=&quot;col-lg-3&quot;&gt;
+    &lt;div class=&quot;text-center&quot;&gt;
+      &lt;img class=&quot;img-circle&quot; 
src=&quot;https://avatars3.githubusercontent.com/u/12387855?s=400&amp;amp;u=37edbfccb6908541f359433f420f9f1bc25bc714&amp;amp;v=4&quot;
 width=&quot;90&quot; height=&quot;90&quot; /&gt;
+      &lt;p&gt;Zhijiang Wang&lt;/p&gt;
+    &lt;/div&gt;
+  &lt;/div&gt;
+&lt;/div&gt;
+
+&lt;h3 id=&quot;new-committers&quot;&gt;New Committers&lt;/h3&gt;
+
+&lt;div class=&quot;row&quot;&gt;
+  &lt;div class=&quot;col-lg-3&quot;&gt;
+    &lt;div class=&quot;text-center&quot;&gt;
+      &lt;img class=&quot;img-circle&quot; 
src=&quot;https://avatars3.githubusercontent.com/u/11538663?s=400&amp;amp;u=f4643f1981e2a8f8a1962c34511b0d32a31d9502&amp;amp;v=4&quot;
 width=&quot;90&quot; height=&quot;90&quot; /&gt;
+      &lt;p&gt;&lt;a 
href=&quot;https://twitter.com/snntrable&quot;&gt;Konstantin 
Knauf&lt;/a&gt;&lt;/p&gt;
+    &lt;/div&gt;
+  &lt;/div&gt;
+  &lt;div class=&quot;col-lg-3&quot;&gt;
+    &lt;div class=&quot;text-center&quot;&gt;
+      &lt;img class=&quot;img-circle&quot; 
src=&quot;https://avatars1.githubusercontent.com/u/1891970?s=400&amp;amp;u=b7718355ceb1f4a8d1e554c3ae7221e2f32cc8e0&amp;amp;v=4&quot;
 width=&quot;90&quot; height=&quot;90&quot; /&gt;
+      &lt;p&gt;&lt;a href=&quot;https://twitter.com/sjwiesman&quot;&gt;Seth 
Wiesman&lt;/a&gt;&lt;/p&gt;
+    &lt;/div&gt;
+  &lt;/div&gt;
+&lt;/div&gt;
+
+&lt;hr /&gt;
+
+&lt;h1 id=&quot;the-bigger-picture&quot;&gt;The Bigger Picture&lt;/h1&gt;
+
+&lt;h2 id=&quot;a-new-self-paced-apache-flink-training&quot;&gt;A new 
self-paced Apache Flink training&lt;/h2&gt;
+
+&lt;div style=&quot;line-height:60%;&quot;&gt;
+    &lt;br /&gt;
+&lt;/div&gt;
+
+&lt;p&gt;This week, the Flink website received the invaluable contribution of 
a self-paced training course curated by David (&lt;a 
href=&quot;https://twitter.com/alpinegizmo&quot;&gt;@alpinegizmo&lt;/a&gt;) — 
or, what used to be the entire training materials under &lt;a 
href=&quot;https://training.ververica.com&quot;&gt;training.ververica.com&lt;/a&gt;. 
The new materials guide you through the very basics of Flink and the DataStream 
API, and round off every concepts section with hands-on exercise [...]
+
+&lt;div style=&quot;line-height:60%;&quot;&gt;
+    &lt;br /&gt;
+&lt;/div&gt;
+
+&lt;center&gt;
+&lt;img 
src=&quot;/img/blog/2020-05-06-community-update/2020-05-06-community-update_1.png&quot;
 width=&quot;1000px&quot; alt=&quot;Self-paced Flink Training&quot; /&gt;
+&lt;/center&gt;
+
+&lt;div style=&quot;line-height:140%;&quot;&gt;
+    &lt;br /&gt;
+&lt;/div&gt;
+
+&lt;p&gt;Whether you’re new to Flink or just looking to strengthen your 
foundations, this training is the most comprehensive way to get started and is 
now completely open source: &lt;a 
href=&quot;https://flink.apache.org/training.html&quot;&gt;https://flink.apache.org/training.html&lt;/a&gt;.
 For now, the materials are only available in English, but the community 
intends to also provide a Chinese translation in the future.&lt;/p&gt;
+
+&lt;hr /&gt;
+
+&lt;h2 id=&quot;google-season-of-docs-2020&quot;&gt;Google Season of Docs 
2020&lt;/h2&gt;
+
+&lt;p&gt;Google Season of Docs (GSOD) is a great initiative organized by &lt;a 
href=&quot;https://opensource.google.com/&quot;&gt;Google Open Source&lt;/a&gt; 
to pair technical writers with mentors to work on documentation for open source 
projects. Last year, the Flink community submitted &lt;a 
href=&quot;https://flink.apache.org/news/2019/04/17/sod.html&quot;&gt;an 
application&lt;/a&gt; that unfortunately didn’t make the cut — but we are 
trying again! This time, with a project idea to i [...]
+
+&lt;p&gt;&lt;strong&gt;1) Restructure the Table API &amp;amp; SQL 
Documentation&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;Reworking the current documentation structure would make it 
possible to:&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;Lower the entry barrier to Flink for non-programmatic (i.e. SQL) 
users.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Make the available features more easily discoverable.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Improve the flow and logical correlation of topics.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;p&gt;&lt;a 
href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=127405685&quot;&gt;FLIP-60&lt;/a&gt;
 contains a detailed proposal on how to reorganize the existing documentation, 
which can be used as a starting point.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;2) Extend the Table API &amp;amp; SQL 
Documentation&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;Some areas of the documentation have insufficient detail or are not 
&lt;a 
href=&quot;https://flink.apache.org/contributing/docs-style.html#general-guiding-principles&quot;&gt;accessible&lt;/a&gt;
 for new Flink users. Examples of topics and sections that require attention 
are: planners, built-in functions, connectors, overview and concepts sections. 
There is a lot of work to be done, and the technical writer could choose which 
areas to focus on — these improvements could then be ad [...]
+
+&lt;p&gt;If you’re interested in learning more about this project idea or want 
to get involved in GSoD as a technical writer, check out the &lt;a 
href=&quot;https://flink.apache.org/news/2020/05/04/season-of-docs.html&quot;&gt;announcement
 blogpost&lt;/a&gt;.&lt;/p&gt;
+
+&lt;hr /&gt;
+
+&lt;h1 id=&quot;and-something-to-read&quot;&gt;…and something to 
read!&lt;/h1&gt;
+
+&lt;p&gt;Events across the globe have pretty much come to a halt, so we’ll 
leave you with some interesting resources to read and explore instead. In 
addition to this written content, you can also recap the sessions from the 
&lt;a 
href=&quot;https://www.youtube.com/playlist?list=PLDX4T_cnKjD0ngnBSU-bYGfgVv17MiwA7&quot;&gt;Flink
 Forward Virtual Conference&lt;/a&gt;!&lt;/p&gt;
+
+&lt;table class=&quot;table table-bordered&quot;&gt;
+  &lt;thead&gt;
+    &lt;tr&gt;
+      &lt;th&gt;Type&lt;/th&gt;
+      &lt;th&gt;Links&lt;/th&gt;
+    &lt;/tr&gt;
+  &lt;/thead&gt;
+  &lt;tbody&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;span class=&quot;glyphicon glyphicon 
glyphicon-bookmark&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; 
Blogposts&lt;/td&gt;
+      &lt;td&gt;&lt;ul&gt;
+                 &lt;li&gt;&lt;a 
href=&quot;https://medium.com/@abdelkrim.hadjidj/event-driven-supply-chain-for-crisis-with-flinksql-be80cb3ad4f9&quot;&gt;Event-Driven
 Supply Chain for Crisis with FlinkSQL and Zeppelin&lt;/a&gt;&lt;/li&gt;
+                 &lt;/ul&gt;
+                 &lt;ul&gt;
+                 &lt;li&gt;&lt;a 
href=&quot;https://flink.apache.org/news/2020/04/21/memory-management-improvements-flink-1.10.html&quot;&gt;Memory
 Management Improvements with Apache Flink 1.10&lt;/a&gt;&lt;/li&gt;
+                 &lt;li&gt;&lt;a 
href=&quot;https://flink.apache.org/news/2020/04/15/flink-serialization-tuning-vol-1.html&quot;&gt;Flink
 Serialization Tuning Vol. 1: Choosing your Serializer — if you 
can&lt;/a&gt;&lt;/li&gt;
+               &lt;/ul&gt;
+         &lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;span class=&quot;glyphicon glyphicon-console&quot; 
aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Tutorials&lt;/td&gt;
+      &lt;td&gt;&lt;ul&gt;
+         &lt;li&gt;&lt;a 
href=&quot;https://flink.apache.org/2020/04/09/pyflink-udf-support-flink.html&quot;&gt;PyFlink:
 Introducing Python Support for UDFs in Flink&#39;s Table 
API&lt;/a&gt;&lt;/li&gt;
+         &lt;li&gt;&lt;a 
href=&quot;https://dev.to/morsapaes/flink-stateful-functions-where-to-start-2j39&quot;&gt;Flink
 Stateful Functions: where to start?&lt;/a&gt;&lt;/li&gt;
+                 &lt;/ul&gt;
+         &lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;span class=&quot;glyphicon glyphicon 
glyphicon-certificate&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Flink 
Packages&lt;/td&gt;
+      &lt;td&gt;&lt;ul&gt;&lt;p&gt;&lt;a 
href=&quot;https://flink-packages.org/&quot;&gt;Flink Packages&lt;/a&gt; is a 
website where you can explore (and contribute to) the Flink &lt;br /&gt; 
ecosystem of connectors, extensions, APIs, tools and integrations. &lt;b&gt;New 
in:&lt;/b&gt; &lt;/p&gt;
+         &lt;li&gt;&lt;a 
href=&quot;https://flink-packages.org/packages/spillable-state-backend-for-flink&quot;&gt;Spillable
 State Backend for Flink&lt;/a&gt;&lt;/li&gt;
+                 &lt;li&gt;&lt;a 
href=&quot;https://flink-packages.org/packages/flink-memory-calculator&quot;&gt;Flink
 Memory Calculator&lt;/a&gt;&lt;/li&gt;
+                 &lt;li&gt;&lt;a 
href=&quot;https://flink-packages.org/packages/ververica-platform-community-edition&quot;&gt;Ververica
 Platform Community Edition&lt;/a&gt;&lt;/li&gt;
+                 &lt;/ul&gt;
+         &lt;/td&gt;
+    &lt;/tr&gt;
+  &lt;/tbody&gt;
+&lt;/table&gt;
+
+&lt;p&gt;If you’d like to keep a closer eye on what’s happening in the 
community, subscribe to the Flink &lt;a 
href=&quot;https://flink.apache.org/community.html#mailing-lists&quot;&gt;@community
 mailing list&lt;/a&gt; to get fine-grained weekly updates, upcoming event 
announcements and more.&lt;/p&gt;
+</description>
+<pubDate>Thu, 07 May 2020 10:00:00 +0200</pubDate>
+<link>https://flink.apache.org/news/2020/05/07/community-update.html</link>
+<guid isPermaLink="true">/news/2020/05/07/community-update.html</guid>
+</item>
+
+<item>
 <title>Applying to Google Season of Docs 2020</title>
 <description>&lt;p&gt;The Flink community is thrilled to share that the 
project is applying again to &lt;a 
href=&quot;https://developers.google.com/season-of-docs/&quot;&gt;Google Season 
of Docs&lt;/a&gt; (GSoD) this year! If you’re unfamiliar with the program, GSoD 
is a great initiative organized by &lt;a 
href=&quot;https://opensource.google.com/&quot;&gt;Google Open Source&lt;/a&gt; 
to pair technical writers with mentors to work on documentation for open source 
projects. The &lt;a href [...]
 
@@ -28,6 +651,12 @@
 
 &lt;p&gt;If working shoulder to shoulder with the Flink community on 
documentation sounds exciting, we’d love to hear from you! You can read more 
about our idea for this year’s project below and, depending on whether it is 
accepted, &lt;a 
href=&quot;https://developers.google.com/season-of-docs/docs/tech-writer-guide&quot;&gt;apply&lt;/a&gt;
 as a technical writer. If you have any questions or just want to know more 
about the project idea, ping us at &lt;a href=&quot;https://flink.apache.o [...]
 
+&lt;div class=&quot;alert alert-info&quot;&gt;
+       Please &lt;a 
href=&quot;mailto:dev-subscr...@flink.apache.org&quot;&gt;subscribe&lt;/a&gt; 
to the Apache Flink mailing list before reaching out.
+       If you are not subscribed, responses to your message will not go 
through.
+       You can &lt;a 
href=&quot;mailto:dev-unsubscr...@flink.apache.org&quot;&gt;unsubscribe&lt;/a&gt;
 at any time. 
+&lt;/div&gt;
+
 &lt;h2 
id=&quot;project-improve-the-table-api--sql-documentation&quot;&gt;Project: 
Improve the Table API &amp;amp; SQL Documentation&lt;/h2&gt;
 
 &lt;p&gt;&lt;a href=&quot;https://flink.apache.org/&quot;&gt;Apache 
Flink&lt;/a&gt; is a stateful stream processor supporting a broad set of use 
cases and featuring APIs at different levels of abstraction that allow users to 
trade off expressiveness and usability, as well as work with their language of 
choice (Java/Scala, SQL or Python). The Table API &amp;amp; SQL are Flink’s 
high-level relational abstractions and focus on data analytics use cases. A 
core principle is that either API ca [...]
@@ -16543,259 +17172,5 @@ Improve usability of command line interface&lt;/p&gt;
 <guid isPermaLink="true">/news/2015/04/13/release-0.9.0-milestone1.html</guid>
 </item>
 
-<item>
-<title>March 2015 in the Flink community</title>
-<description>&lt;p&gt;March has been a busy month in the Flink 
community.&lt;/p&gt;
-
-&lt;h3 id=&quot;scaling-als&quot;&gt;Scaling ALS&lt;/h3&gt;
-
-&lt;p&gt;Flink committers employed at &lt;a 
href=&quot;http://data-artisans.com&quot;&gt;data Artisans&lt;/a&gt; published 
a &lt;a 
href=&quot;http://data-artisans.com/how-to-factorize-a-700-gb-matrix-with-apache-flink/&quot;&gt;blog
 post&lt;/a&gt; on how they scaled matrix factorization with Flink and Google 
Compute Engine to matrices with 28 billion elements.&lt;/p&gt;
-
-&lt;h3 id=&quot;learn-about-the-internals-of-flink&quot;&gt;Learn about the 
internals of Flink&lt;/h3&gt;
-
-&lt;p&gt;The community has started an effort to better document the internals
-of Flink. Check out the first articles on the Flink wiki on &lt;a 
href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=53741525&quot;&gt;how
 Flink
-manages
-memory&lt;/a&gt;,
-&lt;a 
href=&quot;https://cwiki.apache.org/confluence/display/FLINK/Data+exchange+between+tasks&quot;&gt;how
 tasks in Flink exchange
-data&lt;/a&gt;,
-&lt;a 
href=&quot;https://cwiki.apache.org/confluence/display/FLINK/Type+System%2C+Type+Extraction%2C+Serialization&quot;&gt;type
 extraction and serialization in
-Flink&lt;/a&gt;,
-as well as &lt;a 
href=&quot;https://cwiki.apache.org/confluence/display/FLINK/Akka+and+Actors&quot;&gt;how
 Flink builds on Akka for distributed
-coordination&lt;/a&gt;.&lt;/p&gt;
-
-&lt;p&gt;Also check out the &lt;a href=&quot;http://flink.apache.org/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html&quot;&gt;new blog post&lt;/a&gt;
-on how Flink executes joins, with several insights into Flink’s runtime.&lt;/p&gt;
-
-&lt;h3 id=&quot;meetups-and-talks&quot;&gt;Meetups and talks&lt;/h3&gt;
-
-&lt;p&gt;Flink’s machine learning efforts were presented at the &lt;a 
href=&quot;http://www.meetup.com/Machine-Learning-Stockholm/events/221144997/&quot;&gt;Machine
-Learning Stockholm meetup
-group&lt;/a&gt;. The
-regular Berlin Flink meetup featured a talk on the past, present, and
-future of Flink. The talk is available on
-&lt;a 
href=&quot;https://www.youtube.com/watch?v=fw2DBE6ZiEQ&amp;amp;feature=youtu.be&quot;&gt;youtube&lt;/a&gt;.&lt;/p&gt;
-
-&lt;h2 id=&quot;in-the-flink-master&quot;&gt;In the Flink master&lt;/h2&gt;
-
-&lt;h3 id=&quot;table-api-in-scala-and-java&quot;&gt;Table API in Scala and 
Java&lt;/h3&gt;
-
-&lt;p&gt;The new &lt;a 
href=&quot;https://github.com/apache/flink/tree/master/flink-libraries/flink-table&quot;&gt;Table
-API&lt;/a&gt;
-in Flink is now available in both Java and Scala. Check out the
-examples &lt;a 
href=&quot;https://github.com/apache/flink/blob/master/flink-libraries/flink-table/src/main/java/org/apache/flink/examples/java/JavaTableExample.java&quot;&gt;here
 (Java)&lt;/a&gt; and &lt;a 
href=&quot;https://github.com/apache/flink/tree/master/flink-libraries/flink-table/src/main/scala/org/apache/flink/examples/scala&quot;&gt;here
 (Scala)&lt;/a&gt;.&lt;/p&gt;
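-
-&lt;p&gt;For a first impression, here is a minimal sketch of a Table API word
-count in Scala, loosely following the linked examples; the exact imports and
-implicit conversions are assumptions, as they changed across releases:&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.flink.api.scala._
-import org.apache.flink.api.scala.table._
-
-case class WC(word: String, count: Int)
-
-val env = ExecutionEnvironment.getExecutionEnvironment
-val input = env.fromElements(WC(&amp;quot;hello&amp;quot;, 1), WC(&amp;quot;ciao&amp;quot;, 1), WC(&amp;quot;hello&amp;quot;, 1))
-
-// convert the DataSet to a Table, aggregate, and convert back
-val result = input.toTable
-  .groupBy(&#39;word)
-  .select(&#39;word, &#39;count.sum as &#39;count)
-  .toDataSet[WC]
-
-result.print()
-&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;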
-
-&lt;h3 id=&quot;additions-to-the-machine-learning-library&quot;&gt;Additions 
to the Machine Learning library&lt;/h3&gt;
-
-&lt;p&gt;Flink’s &lt;a 
href=&quot;https://github.com/apache/flink/tree/master/flink-libraries/flink-ml&quot;&gt;Machine
 Learning
-library&lt;/a&gt;
-is seeing quite a bit of traction. Recent additions include the &lt;a 
href=&quot;http://arxiv.org/abs/1409.1458&quot;&gt;CoCoA
-algorithm&lt;/a&gt; for distributed
-optimization.&lt;/p&gt;
-
-&lt;h3 
id=&quot;exactly-once-delivery-guarantees-for-streaming-jobs&quot;&gt;Exactly-once
 delivery guarantees for streaming jobs&lt;/h3&gt;
-
-&lt;p&gt;Flink streaming jobs now provide exactly-once processing guarantees
-when coupled with persistent sources (notably &lt;a href=&quot;http://kafka.apache.org&quot;&gt;Apache Kafka&lt;/a&gt;).
-Flink periodically checkpoints and persists the offsets of the sources and
-restarts from those checkpoints upon failure recovery. This functionality is
-currently limited in that it does not yet handle large state and iterative
-programs.&lt;/p&gt;
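-
-&lt;p&gt;As a minimal sketch (assuming the streaming Scala API and an
-illustrative checkpoint interval), enabling these guarantees boils down to
-turning on periodic checkpointing:&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.flink.streaming.api.scala._
-
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-// persist source offsets every 5 seconds; on failure, the job is
-// restarted from the last completed checkpoint
-env.enableCheckpointing(5000)
-&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;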
-
-</description>
-<pubDate>Tue, 07 Apr 2015 12:00:00 +0200</pubDate>
-<link>https://flink.apache.org/news/2015/04/07/march-in-flink.html</link>
-<guid isPermaLink="true">/news/2015/04/07/march-in-flink.html</guid>
-</item>
-
-<item>
-<title>Peeking into Apache Flink&#39;s Engine Room</title>
-<description>&lt;h3 id=&quot;join-processing-in-apache-flink&quot;&gt;Join 
Processing in Apache Flink&lt;/h3&gt;
-
-&lt;p&gt;Joins are prevalent operations in many data processing applications. 
Most data processing systems feature APIs that make joining data sets very 
easy. However, the internal algorithms for join processing are much more 
involved – especially if large data sets need to be efficiently handled. 
Therefore, join processing serves as a good example to discuss the salient 
design points and implementation details of a data processing system.&lt;/p&gt;
-
-&lt;p&gt;In this blog post, we cut through Apache Flink’s layered architecture 
and take a look at its internals with a focus on how it handles joins. 
Specifically, I will&lt;/p&gt;
-
-&lt;ul&gt;
-  &lt;li&gt;show how easy it is to join data sets using Flink’s fluent 
APIs,&lt;/li&gt;
-  &lt;li&gt;discuss basic distributed join strategies, Flink’s join 
implementations, and its memory management,&lt;/li&gt;
-  &lt;li&gt;talk about Flink’s optimizer that automatically chooses join 
strategies,&lt;/li&gt;
-  &lt;li&gt;show some performance numbers for joining data sets of different 
sizes, and finally&lt;/li&gt;
-  &lt;li&gt;briefly discuss joining of co-located and pre-sorted data 
sets.&lt;/li&gt;
-&lt;/ul&gt;
-
-&lt;p&gt;&lt;em&gt;Disclaimer&lt;/em&gt;: This blog post is exclusively about 
equi-joins. Whenever I say “join” in the following, I actually mean 
“equi-join”.&lt;/p&gt;
-
-&lt;h3 id=&quot;how-do-i-join-with-flink&quot;&gt;How do I join with 
Flink?&lt;/h3&gt;
-
-&lt;p&gt;Flink provides fluent APIs in Java and Scala to write data flow
-programs. Flink’s APIs are centered around parallel data collections, which
-are called data sets. Data sets are processed by applying transformations that
-compute new data sets. Flink’s transformations include Map and Reduce, as
-known from MapReduce &lt;a href=&quot;http://research.google.com/archive/mapreduce.html&quot;&gt;[1]&lt;/a&gt;,
-but also operators for joining, co-grouping, and iterative processing. The
-docume [...]
-
-&lt;p&gt;Joining two Scala case class data sets is very easy, as the following
-example shows:&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code 
class=&quot;language-scala&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// define 
your data types&lt;/span&gt;
-&lt;span class=&quot;k&quot;&gt;case&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span 
class=&quot;nc&quot;&gt;PageVisit&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;n&quot;&gt;url&lt;/span&gt;&lt;span 
class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span 
class=&quot;kt&quot;&gt;String&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;ip&lt;/span&gt;&lt;span 
class=&quot;k&quot;&gt;:&lt;/span&gt; &l [...]
-&lt;span class=&quot;k&quot;&gt;case&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span 
class=&quot;nc&quot;&gt;User&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;n&quot;&gt;id&lt;/span&gt;&lt;span 
class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span 
class=&quot;kt&quot;&gt;Long&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;name&lt;/span&gt;&lt;span 
class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span [...]
-
-&lt;span class=&quot;c1&quot;&gt;// get your data from somewhere&lt;/span&gt;
-&lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;visits&lt;/span&gt;&lt;span 
class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span 
class=&quot;kt&quot;&gt;DataSet&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;[&lt;/span&gt;&lt;span 
class=&quot;kt&quot;&gt;PageVisit&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;]&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;o&quot;&gt;...&lt;/span&gt;
-&lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;users&lt;/span&gt;&lt;span 
class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span 
class=&quot;kt&quot;&gt;DataSet&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;[&lt;/span&gt;&lt;span 
class=&quot;kt&quot;&gt;User&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;]&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;o&quot;&gt;...&lt;/span&gt;
-
-&lt;span class=&quot;c1&quot;&gt;// filter the users data set&lt;/span&gt;
-&lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;germanUsers&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;users&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span 
class=&quot;n&quot;&gt;filter&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;((&lt;/span&gt;&lt;span 
class=&quot;n&quot;&gt;u&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;=&amp;gt;&lt;/span&g [...]
-&lt;span class=&quot;c1&quot;&gt;// join data sets&lt;/span&gt;
-&lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;germanVisits&lt;/span&gt;&lt;span 
class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span 
class=&quot;kt&quot;&gt;DataSet&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;[(&lt;/span&gt;&lt;span 
class=&quot;kt&quot;&gt;PageVisit&lt;/span&gt;, &lt;span 
class=&quot;kt&quot;&gt;User&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;)]&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;=&lt;/span&gt;
-      &lt;span class=&quot;c1&quot;&gt;// equi-join condition 
(PageVisit.userId = User.id)&lt;/span&gt;
-     &lt;span class=&quot;n&quot;&gt;visits&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span 
class=&quot;n&quot;&gt;join&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;n&quot;&gt;germanUsers&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span 
class=&quot;n&quot;&gt;where&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;s&quot;&gt;&amp;quot;userId&amp;quot;&lt;/span&gt;&lt;span 
class=&quot;o&qu [...]
-
-&lt;p&gt;Flink’s APIs also allow you to:&lt;/p&gt;
-
-&lt;ul&gt;
-  &lt;li&gt;apply a user-defined join function to each pair of joined elements
-instead of returning a &lt;code&gt;($Left, $Right)&lt;/code&gt; tuple (see the
-sketch below),&lt;/li&gt;
-  &lt;li&gt;select fields of pairs of joined Tuple elements (projection), and&lt;/li&gt;
-  &lt;li&gt;define composite join keys such as &lt;code&gt;.where(&amp;quot;orderDate&amp;quot;, &amp;quot;zipCode&amp;quot;).equalTo(&amp;quot;date&amp;quot;, &amp;quot;zip&amp;quot;)&lt;/code&gt;.&lt;/li&gt;
-&lt;/ul&gt;
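-
-&lt;p&gt;A minimal sketch of the first point, reusing the
-&lt;code&gt;visits&lt;/code&gt; and &lt;code&gt;germanUsers&lt;/code&gt; data
-sets from the example above (the function body is illustrative):&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// apply a join function to each matched pair instead of building
-// (PageVisit, User) tuples
-val visitedUrls: DataSet[(String, String)] =
-  visits.join(germanUsers).where(&amp;quot;userId&amp;quot;).equalTo(&amp;quot;id&amp;quot;) {
-    (visit, user) =&amp;gt; (user.name, visit.url)
-  }
-&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;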
-
-&lt;p&gt;See the documentation for more details on Flink’s join features &lt;a 
href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html#join&quot;&gt;[3]&lt;/a&gt;.&lt;/p&gt;
-
-&lt;h3 id=&quot;how-does-flink-join-my-data&quot;&gt;How does Flink join my 
data?&lt;/h3&gt;
-
-&lt;p&gt;Flink uses techniques which are well known from parallel database
-systems to efficiently execute parallel joins. A join operator must establish
-all pairs of elements from its input data sets for which the join condition
-evaluates to true. In a standalone system, the most straightforward
-implementation of a join is the so-called nested-loop join which builds the
-full Cartesian product and evaluates the join condition for each pair of
-elements. This strategy has quadratic complex [...]
-
-&lt;p&gt;In a distributed system, joins are commonly processed in two
-steps:&lt;/p&gt;
-
-&lt;ol&gt;
-  &lt;li&gt;The data of both inputs is distributed across all parallel
-instances that participate in the join, and&lt;/li&gt;
-  &lt;li&gt;each parallel instance performs a standard standalone join
-algorithm on its local partition of the overall data.&lt;/li&gt;
-&lt;/ol&gt;
-
-&lt;p&gt;The distribution of data across parallel instances must ensure that 
each valid join pair can be locally built by exactly one instance. For both 
steps, there are multiple valid strategies that can be independently picked and 
which are favorable in different situations. In Flink terminology, the first 
phase is called Ship Strategy and the second phase Local Strategy. In the 
following I will describe Flink’s ship and local strategies to join two data 
sets &lt;em&gt;R&lt;/em&gt; and [...]
-
-&lt;h4 id=&quot;ship-strategies&quot;&gt;Ship Strategies&lt;/h4&gt;
-&lt;p&gt;Flink features two ship strategies to establish a valid data 
partitioning for a join:&lt;/p&gt;
-
-&lt;ul&gt;
-  &lt;li&gt;the &lt;em&gt;Repartition-Repartition&lt;/em&gt; strategy (RR) 
and&lt;/li&gt;
-  &lt;li&gt;the &lt;em&gt;Broadcast-Forward&lt;/em&gt; strategy 
(BF).&lt;/li&gt;
-&lt;/ul&gt;
-
-&lt;p&gt;The Repartition-Repartition strategy partitions both inputs, R and S, 
on their join key attributes using the same partitioning function. Each 
partition is assigned to exactly one parallel join instance and all data of 
that partition is sent to its associated instance. This ensures that all 
elements that share the same join key are shipped to the same parallel instance 
and can be locally joined. The cost of the RR strategy is a full shuffle of 
both data sets over the network.&lt;/p&gt;
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/joins-repartition.png&quot; 
style=&quot;width:90%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;The Broadcast-Forward strategy sends one complete data set (R) to 
each parallel instance that holds a partition of the other data set (S), i.e., 
each parallel instance receives the full data set R. Data set S remains local 
and is not shipped at all. The cost of the BF strategy depends on the size of R 
and the number of parallel instances it is shipped to. The size of S does not 
matter because S is not moved. The figure below illustrates how both ship 
strategies work.&lt;/p&gt;
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/joins-broadcast.png&quot; 
style=&quot;width:90%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;The Repartition-Repartition and Broadcast-Forward ship strategies
-establish suitable data distributions to execute a distributed join. Depending
-on the operations that are applied before the join, one or even both inputs of
-a join may already be distributed in a suitable way across parallel instances.
-In this case, Flink will reuse such distributions and ship only one input, or
-none at all.&lt;/p&gt;
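-
-&lt;p&gt;As a small sketch, you can also pre-partition the inputs yourself
-(whether the shuffle before the join is actually avoided depends on the plan
-the optimizer chooses):&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// hash-partition both inputs on their join keys up front; a
-// subsequent join on the same keys can reuse this distribution
-// instead of re-shuffling both data sets
-val partitionedVisits = visits.partitionByHash(&amp;quot;userId&amp;quot;)
-val partitionedUsers = users.partitionByHash(&amp;quot;id&amp;quot;)
-val joined = partitionedVisits.join(partitionedUsers)
-  .where(&amp;quot;userId&amp;quot;).equalTo(&amp;quot;id&amp;quot;)
-&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;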
-
-&lt;h4 id=&quot;flinks-memory-management&quot;&gt;Flink’s Memory 
Management&lt;/h4&gt;
-&lt;p&gt;Before delving into the details of Flink’s local join algorithms, I 
will briefly discuss Flink’s internal memory management. Data processing 
algorithms such as joining, grouping, and sorting need to hold portions of 
their input data in memory. While such algorithms perform best if there is 
enough memory available to hold all data, it is crucial to gracefully handle 
situations where the data size exceeds memory. Such situations are especially 
tricky in JVM-based systems such as F [...]
-
-&lt;p&gt;Flink handles this challenge by actively managing its memory. When a
-worker node (TaskManager) is started, it allocates a fixed portion (70% by
-default) of the JVM heap memory that remains available after initialization
-and segments it into 32KB byte arrays. These byte arrays are distributed as
-working memory to all algorithms that need to hold significant portions of
-data in memory. The algorithms receive their input data as Java data objects
-and serialize them into their working memory.&lt;/p&gt;
-
-&lt;p&gt;This design has several nice properties. First, the number of data
-objects on the JVM heap is much lower, resulting in less garbage collection
-pressure. Second, heap objects carry a certain space overhead, so the binary
-representation is more compact. Especially data sets of many small elements
-benefit from that. Third, an algorithm knows exactly when the input data
-exceeds its working memory and can react by writing some of its filled byte
-arrays to the worker’s local file [...]
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/joins-memmgmt.png&quot; 
style=&quot;width:90%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;This active memory management makes Flink extremely robust for
-processing very large data sets on limited memory resources, while preserving
-all benefits of in-memory processing if the data is small enough to fit in
-memory. De/serializing data into and from memory has a certain cost overhead
-compared to simply holding all data elements on the JVM’s heap. However, Flink
-features efficient custom de/serializers which also allow performing certain
-operations such as comparisons directly [...]
-
-&lt;h4 id=&quot;local-strategies&quot;&gt;Local Strategies&lt;/h4&gt;
-
-&lt;p&gt;After the data has been distributed across all parallel join 
instances using either a Repartition-Repartition or Broadcast-Forward ship 
strategy, each instance runs a local join algorithm to join the elements of its 
local partition. Flink’s runtime features two common join strategies to perform 
these local joins:&lt;/p&gt;
-
-&lt;ul&gt;
-  &lt;li&gt;the &lt;em&gt;Sort-Merge-Join&lt;/em&gt; strategy (SM) 
and&lt;/li&gt;
-  &lt;li&gt;the &lt;em&gt;Hybrid-Hash-Join&lt;/em&gt; strategy (HH).&lt;/li&gt;
-&lt;/ul&gt;
-
-&lt;p&gt;The Sort-Merge-Join works by first sorting both input data sets on 
their join key attributes (Sort Phase) and merging the sorted data sets as a 
second step (Merge Phase). The sort is done in-memory if the local partition of 
a data set is small enough. Otherwise, an external merge-sort is done by 
collecting data until the working memory is filled, sorting it, writing the 
sorted data to the local filesystem, and starting over by filling the working 
memory again with more incoming  [...]
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/joins-smj.png&quot; 
style=&quot;width:90%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;The Hybrid-Hash-Join distinguishes its inputs as build-side and 
probe-side input and works in two phases, a build phase followed by a probe 
phase. In the build phase, the algorithm reads the build-side input and inserts 
all data elements into an in-memory hash table indexed by their join key 
attributes. If the hash table outgrows the algorithm’s working memory, parts of 
the hash table (ranges of hash indexes) are written to the local filesystem. 
The build phase ends after the bu [...]
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/joins-hhj.png&quot; 
style=&quot;width:90%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;h3 id=&quot;how-does-flink-choose-join-strategies&quot;&gt;How does Flink 
choose join strategies?&lt;/h3&gt;
-
-&lt;p&gt;Ship and local strategies do not depend on each other and can be 
independently chosen. Therefore, Flink can execute a join of two data sets R 
and S in nine different ways by combining any of the three ship strategies (RR, 
BF with R being broadcasted, BF with S being broadcasted) with any of the three 
local strategies (SM, HH with R being build-side, HH with S being build-side). 
Each of these strategy combinations results in different execution performance 
depending on the data s [...]
-
-&lt;p&gt;Flink features a cost-based optimizer which automatically chooses the
-execution strategies for all operators, including joins. Without going into the
-details of cost-based optimization, this is done by computing cost estimates
-for execution plans with different strategies and picking the plan with the
-least estimated costs. Thereby, the optimizer estimates the amount of data
-which is shipped over the network and written to disk. If no reliable size
-estimates for the input dat [...]
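-
-&lt;p&gt;If you know your data better than the optimizer, join hints let you
-override its choice. A sketch, reusing the data sets from above, assuming the
-&lt;code&gt;JoinHint&lt;/code&gt; overload of &lt;code&gt;join&lt;/code&gt;
-described in the optimizer hints documentation [5] and that
-&lt;code&gt;germanUsers&lt;/code&gt; is known to be small:&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.flink.api.common.operators.base.JoinOperatorBase.JoinHint
-
-// broadcast the second (small) input and build the hash table from it,
-// i.e. force the Broadcast-Forward + Hybrid-Hash-Join combination
-val hinted = visits.join(germanUsers, JoinHint.BROADCAST_HASH_SECOND)
-  .where(&amp;quot;userId&amp;quot;).equalTo(&amp;quot;id&amp;quot;)
-&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;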
-
-&lt;h3 id=&quot;how-is-flinks-join-performance&quot;&gt;How is Flink’s join 
performance?&lt;/h3&gt;
-
-&lt;p&gt;Alright, that sounds good, but how fast are joins in Flink? Let’s
-have a look. We start with a benchmark of the single-core performance of
-Flink’s Hybrid-Hash-Join implementation and run a Flink program that executes a
-Hybrid-Hash-Join with parallelism 1. We run the program on an n1-standard-2
-Google Compute Engine instance (2 vCPUs, 7.5GB memory) with two locally
-attached SSDs. We give 4GB as working memory to the join. The join program
-generates 1KB records for both inputs on-t [...]
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/joins-single-perf.png&quot; 
style=&quot;width:85%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;The joins with 1 to 3 GB build side (blue bars) are pure in-memory 
joins. The other joins partially spill data to disk (4 to 12GB, orange bars). 
The results show that the performance of Flink’s Hybrid-Hash-Join remains 
stable as long as the hash table completely fits into memory. As soon as the 
hash table becomes larger than the working memory, parts of the hash table and 
corresponding parts of the probe side are spilled to disk. The chart shows that 
the performance of the Hybri [...]
-
-&lt;p&gt;So, Flink’s Hybrid-Hash-Join implementation performs well on a single 
thread even for limited memory resources, but how good is Flink’s performance 
when joining larger data sets in a distributed setting? For the next experiment 
we compare the performance of the most common join strategy combinations, 
namely:&lt;/p&gt;
-
-&lt;ul&gt;
-  &lt;li&gt;Broadcast-Forward, Hybrid-Hash-Join (broadcasting and building 
with the smaller side),&lt;/li&gt;
-  &lt;li&gt;Repartition, Hybrid-Hash-Join (building with the smaller side), 
and&lt;/li&gt;
-  &lt;li&gt;Repartition, Sort-Merge-Join&lt;/li&gt;
-&lt;/ul&gt;
-
-&lt;p&gt;for different input size ratios:&lt;/p&gt;
-
-&lt;ul&gt;
-  &lt;li&gt;1GB     : 1000GB&lt;/li&gt;
-  &lt;li&gt;10GB    : 1000GB&lt;/li&gt;
-  &lt;li&gt;100GB   : 1000GB&lt;/li&gt;
-  &lt;li&gt;1000GB  : 1000GB&lt;/li&gt;
-&lt;/ul&gt;
-
-&lt;p&gt;The Broadcast-Forward strategy is only executed for inputs of up to
-10GB. Building a hash table from 100GB of broadcasted data in 5GB of working
-memory would result in spilling approximately 95GB (build input) + 950GB
-(probe input) in each parallel thread and require more than 8TB of local disk
-storage on each machine.&lt;/p&gt;
-
-&lt;p&gt;As in the single-core benchmark, we run 1:N joins, generate the data 
on-the-fly, and immediately discard the result after the join. We run the 
benchmark on 10 n1-highmem-8 Google Compute Engine instances. Each instance is 
equipped with 8 cores, 52GB RAM, 40GB of which are configured as working memory 
(5GB per core), and one local SSD for spilling to disk. All benchmarks are 
performed using the same configuration, i.e., no fine tuning for the respective 
data sizes is done. The pr [...]
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/joins-dist-perf.png&quot; 
style=&quot;width:70%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;As expected, the Broadcast-Forward strategy performs best for very
-small inputs because the large probe side is not shipped over the network and
-is locally joined. However, when the size of the broadcasted side grows, two
-problems arise: the amount of data which is shipped increases, and each
-parallel instance has to process the full broadcasted data set. The
-performance of both repartitioning strategies behaves similarly for growing
-input sizes, which indicates that thes [...]
-
-&lt;h3 
id=&quot;ive-got-sooo-much-data-to-join-do-i-really-need-to-ship-it&quot;&gt;I’ve
 got sooo much data to join, do I really need to ship it?&lt;/h3&gt;
-
-&lt;p&gt;We have seen that off-the-shelf distributed joins work really well in 
Flink. But what if your data is so huge that you do not want to shuffle it 
across your cluster? We recently added some features to Flink for specifying 
semantic properties (partitioning and sorting) on input splits and co-located 
reading of local input files. With these tools at hand, it is possible to join 
pre-partitioned data sets from your local filesystem without sending a single 
byte over your cluster’s n [...]
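-
-&lt;p&gt;Related to this, semantic annotations (reference [4] below) tell the
-optimizer which fields a function leaves untouched, so an existing
-partitioning can survive a UDF. A sketch, under the assumption that the map
-function really does forward &lt;code&gt;userId&lt;/code&gt; unchanged:&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// declare that the map function forwards the userId field unchanged;
-// an existing partitioning on that field remains valid, so a later
-// join on it needs no additional shuffle
-val normalizedVisits = visits
-  .map(v =&amp;gt; v.copy(url = v.url.toLowerCase))
-  .withForwardedFields(&amp;quot;userId&amp;quot;)
-&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;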
-
-&lt;h3 id=&quot;tldr-what-should-i-remember-from-all-of-this&quot;&gt;tl;dr: 
What should I remember from all of this?&lt;/h3&gt;
-
-&lt;ul&gt;
-  &lt;li&gt;Flink’s fluent Scala and Java APIs make joins and other data
-transformations a piece of cake.&lt;/li&gt;
-  &lt;li&gt;The optimizer makes the hard choices for you, but gives you control
-in case you know better.&lt;/li&gt;
-  &lt;li&gt;Flink’s join implementations perform very well in-memory and
-gracefully degrade when going to disk.&lt;/li&gt;
-  &lt;li&gt;Due to Flink’s robust memory management, there is no need for job-
-or data-specific memory tuning to avoid a nasty
-&lt;code&gt;OutOfMemoryException&lt;/code&gt;. It just runs out of the
-box.&lt;/li&gt;
-&lt;/ul&gt;
-
-&lt;h4 id=&quot;references&quot;&gt;References&lt;/h4&gt;
-
-&lt;p&gt;[1] &lt;a href=&quot;http://research.google.com/archive/mapreduce.html&quot;&gt;“MapReduce: Simplified data processing
-on large clusters”&lt;/a&gt;, Dean, Ghemawat, 2004 &lt;br /&gt;
-[2] &lt;a 
href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html&quot;&gt;Flink
 0.8.1 documentation: Data Transformations&lt;/a&gt; &lt;br /&gt;
-[3] &lt;a 
href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html#join&quot;&gt;Flink
 0.8.1 documentation: Joins&lt;/a&gt; &lt;br /&gt;
-[4] &lt;a 
href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/batch/index.html#semantic-annotations&quot;&gt;Flink
 1.0 documentation: Semantic annotations&lt;/a&gt; &lt;br /&gt;
-[5] &lt;a 
href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/batch/dataset_transformations.html#join-algorithm-hints&quot;&gt;Flink
 1.0 documentation: Optimizer join hints&lt;/a&gt; &lt;br /&gt;&lt;/p&gt;
-</description>
-<pubDate>Fri, 13 Mar 2015 11:00:00 +0100</pubDate>
-<link>https://flink.apache.org/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html</link>
-<guid 
isPermaLink="true">/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html</guid>
-</item>
-
 </channel>
 </rss>
diff --git a/content/news/2020/05/07/community-update.html 
b/content/news/2020/05/07/community-update.html
index 8af9e16..7583d76 100644
--- a/content/news/2020/05/07/community-update.html
+++ b/content/news/2020/05/07/community-update.html
@@ -295,7 +295,7 @@
   <div class="col-lg-3">
     <div class="text-center">
       <img class="img-circle" 
src="https://avatars1.githubusercontent.com/u/4971479?s=400&amp;u=49d4f217e26186606ab13a17a23a038b62b86682&amp;v=4";
 width="90" height="90" />
-      <p><a href="https://twitter.com/HequnC";>Hequn Chen</a></p>
+      <p><a href="https://twitter.com/HequnC";>Hequn Cheng</a></p>
     </div>
   </div>
   <div class="col-lg-3">
