This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 81406600a87c13c6e9d21cc64f069a25929ba01c
Author: Fabian Hueske <fhue...@apache.org>
AuthorDate: Thu Feb 20 17:03:08 2020 +0100

    Rebuild website
---
 content/blog/feed.xml            | 125 ++++++++++++++
 content/blog/index.html          |  36 ++--
 content/blog/page10/index.html   |  28 +++
 content/blog/page2/index.html    |  36 ++--
 content/blog/page3/index.html    |  38 +++--
 content/blog/page4/index.html    |  40 +++--
 content/blog/page5/index.html    |  40 +++--
 content/blog/page6/index.html    |  40 +++--
 content/blog/page7/index.html    |  40 +++--
 content/blog/page8/index.html    |  39 +++--
 content/blog/page9/index.html    |  42 +++--
 content/index.html               |   6 +-
 content/news/2020/02/20/ddl.html | 358 +++++++++++++++++++++++++++++++++++++++
 content/zh/index.html            |   6 +-
 14 files changed, 735 insertions(+), 139 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 838a6ce..20b395c 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,131 @@
 <atom:link href="https://flink.apache.org/blog/feed.xml"; rel="self" 
type="application/rss+xml" />
 
 <item>
+<title>No Java Required: Configuring Sources and Sinks in SQL</title>
+<description>&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;
+
+&lt;p&gt;The recent &lt;a 
href=&quot;https://flink.apache.org/news/2020/02/11/release-1.10.0.html&quot;&gt;Apache
 Flink 1.10 release&lt;/a&gt; includes many exciting features.
+In particular, it marks the end of the community’s year-long effort to merge 
in the &lt;a 
href=&quot;https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html&quot;&gt;Blink
 SQL contribution&lt;/a&gt; from Alibaba.
+The reason the community chose to spend so much time on the contribution is 
that SQL works.
+It allows Flink to offer a truly unified interface over batch and streaming 
and makes stream processing accessible to a broad audience of developers and 
analysts.
+Best of all, Flink SQL is ANSI-SQL compliant, which means if you’ve ever used 
a database in the past, you already know it&lt;sup 
id=&quot;fnref:1&quot;&gt;&lt;a href=&quot;#fn:1&quot; 
class=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;!&lt;/p&gt;
+
+&lt;p&gt;Much of this work focused on improving runtime performance and 
progressively extending Flink SQL’s coverage of the SQL standard.
+Flink now supports the full TPC-DS query set for batch queries, reflecting the 
readiness of its SQL engine to address the needs of modern data warehouse-like 
workloads.
+Its streaming SQL supports an almost equal set of features - those that are 
well defined on a streaming runtime - including &lt;a 
href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/streaming/joins.html&quot;&gt;complex
 joins&lt;/a&gt; and &lt;a 
href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/match_recognize.html&quot;&gt;MATCH_RECOGNIZE&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;As important as this work is, the community also strives to make 
these features generally accessible to the broadest audience possible.
+That is why, in 1.10, the Flink community is excited to offer production-ready 
DDL syntax (e.g., &lt;code&gt;CREATE TABLE&lt;/code&gt;, &lt;code&gt;DROP 
TABLE&lt;/code&gt;) and a refactored catalog interface.&lt;/p&gt;
+
+&lt;h1 id=&quot;accessing-your-data-where-it-lives&quot;&gt;Accessing Your 
Data Where It Lives&lt;/h1&gt;
+
+&lt;p&gt;Flink does not store data at rest; it is a compute engine and 
requires other systems to consume input from and write output to.
+Those that have used Flink’s &lt;code&gt;DataStream&lt;/code&gt; API in the 
past will be familiar with connectors that allow for interacting with external 
systems. 
+Flink has a vast connector ecosystem that includes all major message queues, 
filesystems, and databases.&lt;/p&gt;
+
+&lt;div class=&quot;alert alert-info&quot;&gt;
+If your favorite system does not have a connector maintained in the central 
Apache Flink repository, check out the &lt;a 
href=&quot;https://flink-packages.org&quot;&gt;flink packages 
website&lt;/a&gt;, which has a growing number of community-maintained 
components.
+&lt;/div&gt;
+
+&lt;p&gt;While these connectors are battle-tested and production-ready, they 
are written in Java and configured in code, which means they are not amenable 
to pure SQL or Table applications.
+For a holistic SQL experience, not only do queries need to be written in SQL, but 
also table definitions.&lt;/p&gt;
+
+&lt;h1 id=&quot;create-table-statements&quot;&gt;CREATE TABLE 
Statements&lt;/h1&gt;
+
+&lt;p&gt;While Flink SQL has long provided table abstractions atop some of 
Flink’s most popular connectors, configurations were not always so 
straightforward.
+Beginning in 1.10, Flink supports defining tables through &lt;code&gt;CREATE 
TABLE&lt;/code&gt; statements.
+With this feature, users can now create logical tables, backed by various 
external systems, in pure SQL.&lt;/p&gt;
+
+&lt;p&gt;By defining tables in SQL, developers can write queries against 
logical schemas that are abstracted away from the underlying physical data 
store. Coupled with Flink SQL’s unified approach to batch and stream 
processing, Flink provides a straight line from discovery to 
production.&lt;/p&gt;
+
+&lt;p&gt;Users can define tables over static data sets, anything from a local 
CSV file to a full-fledged data lake or even Hive.
+Leveraging Flink’s efficient batch processing capabilities, they can perform 
ad-hoc queries searching for exciting insights.
+Once something interesting is identified, businesses can gain real-time and 
continuous insights by merely altering the table so that it is powered by a 
message queue such as Kafka.
+Because Flink guarantees SQL queries have unified semantics over batch and 
streaming, users can be confident that redeploying this query as a continuous 
streaming application over a message queue will output identical 
results.&lt;/p&gt;
+
+&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code 
class=&quot;language-sql&quot; data-lang=&quot;sql&quot;&gt;&lt;span 
class=&quot;c1&quot;&gt;-- Define a table called orders that is backed by a 
Kafka topic&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- The definition includes all relevant Kafka 
properties,&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- the underlying format (JSON) and even 
defines a&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- watermarking algorithm based on one of the 
fields&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- so that this table can be used with event 
time.&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;orders&lt;/span&gt; &lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;user_id&lt;/span&gt;    &lt;span 
class=&quot;nb&quot;&gt;BIGINT&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;product&lt;/span&gt;    &lt;span 
class=&quot;n&quot;&gt;STRING&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;order_time&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;TIMESTAMP&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;),&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;WATERMARK&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;FOR&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;order_time&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;AS&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;order_time&lt;/span&gt; &lt;span 
class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;5&amp;#39;&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;SECONDS&lt;/span&gt;
+&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;WITH&lt;/span&gt; &lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;
+       &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;connector.type&amp;#39;&lt;/span&gt;           
&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;kafka&amp;#39;&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;connector.version&amp;#39;&lt;/span&gt;        
&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;universal&amp;#39;&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;connector.topic&amp;#39;&lt;/span&gt;          
&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;orders&amp;#39;&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;connector.startup-mode&amp;#39;&lt;/span&gt; 
&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;earliest-offset&amp;#39;&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;connector.properties.bootstrap.servers&amp;#39;&lt;/span&gt;
 &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;localhost:9092&amp;#39;&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;format.type&amp;#39;&lt;/span&gt; &lt;span 
class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;json&amp;#39;&lt;/span&gt; 
+&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;-- Define a table called 
product_analysis&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- on top of ElasticSearch 7 where we 
&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- can write the results of our query. 
&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;product_analysis&lt;/span&gt; &lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;product&lt;/span&gt;    &lt;span 
class=&quot;n&quot;&gt;STRING&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;tracking_time&lt;/span&gt;      
&lt;span class=&quot;k&quot;&gt;TIMESTAMP&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;),&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;units_sold&lt;/span&gt;         
&lt;span class=&quot;nb&quot;&gt;BIGINT&lt;/span&gt;
+&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;WITH&lt;/span&gt; &lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;
+       &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;connector.type&amp;#39;&lt;/span&gt;    
&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;elasticsearch&amp;#39;&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;connector.version&amp;#39;&lt;/span&gt; 
&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;7&amp;#39;&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;connector.hosts&amp;#39;&lt;/span&gt;   
&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;localhost:9200&amp;#39;&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;connector.index&amp;#39;&lt;/span&gt;   
&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;ProductAnalysis&amp;#39;&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;connector.document.type&amp;#39;&lt;/span&gt; 
&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;analysis&amp;#39;&lt;/span&gt; 
+&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;-- A simple query that analyzes order 
data&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- from Kafka and writes results into 
&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- ElasticSearch. &lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;INSERT&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;INTO&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;product_analysis&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;SELECT&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;product&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;TUMBLE_START&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;n&quot;&gt;order_time&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span 
class=&quot;nb&quot;&gt;INTERVAL&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;1&amp;#39;&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;DAY&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tracking_time&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span class=&quot;k&quot;&gt;COUNT&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;as&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;units_sold&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;orders&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;GROUP&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;BY&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;product&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;TUMBLE&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;n&quot;&gt;order_time&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span 
class=&quot;nb&quot;&gt;INTERVAL&lt;/span&gt; &lt;span 
class=&quot;s1&quot;&gt;&amp;#39;1&amp;#39;&lt;/span&gt; &lt;span 
class=&quot;k&quot;&gt;DAY&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
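+
+&lt;p&gt;As a sketch of the batch-first workflow described above (the table 
+name and file path are placeholders, and the property keys follow the 1.10 
+connector descriptors used in this post), the same 
+&lt;code&gt;orders&lt;/code&gt; schema could initially be defined over a 
+static CSV file:&lt;/p&gt;
+
+&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot; data-lang=&quot;sql&quot;&gt;-- A batch-first variant of the orders table, backed by
+-- a static CSV file instead of Kafka. Swapping this WITH
+-- clause for the Kafka properties above turns the same
+-- query into a continuous streaming job.
+-- (Depending on the version, additional format properties
+-- may be required.)
+CREATE TABLE orders (
+       user_id    BIGINT,
+       product    STRING,
+       order_time TIMESTAMP(3)
+) WITH (
+       'connector.type' = 'filesystem',
+       'connector.path' = 'file:///tmp/orders.csv',
+       'format.type'    = 'csv'
+);&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;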
+
+&lt;h1 id=&quot;catalogs&quot;&gt;Catalogs&lt;/h1&gt;
+
+&lt;p&gt;While being able to create tables is important, it often isn’t enough.
+A business analyst, for example, shouldn’t have to know what properties to set 
for Kafka, or even have to know what the underlying data source is, to be able 
to write a query.&lt;/p&gt;
+
+&lt;p&gt;To solve this problem, Flink 1.10 also ships with a revamped catalog 
system for managing metadata about tables and user defined functions.
+With catalogs, users can create tables once and reuse them across Jobs and 
Sessions.
+Now, the team managing a data set can create a table and immediately make it 
accessible to other groups within their organization.&lt;/p&gt;
+
+&lt;p&gt;The most notable catalog that Flink integrates with today is Hive 
Metastore.
+The Hive catalog allows Flink to fully interoperate with Hive and serve as a 
more efficient query engine.
+Flink supports reading and writing Hive tables, using Hive UDFs, and even 
leveraging Hive’s metastore catalog to persist Flink-specific 
metadata.&lt;/p&gt;
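+
+&lt;p&gt;As a minimal sketch, assuming a Hive catalog has been registered 
+under the placeholder name &lt;code&gt;myhive&lt;/code&gt; (in 1.10 this is 
+configured in the SQL Client’s YAML file), tables stored in it become 
+directly queryable from any session:&lt;/p&gt;
+
+&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot; data-lang=&quot;sql&quot;&gt;-- Switch to the (hypothetical) registered Hive catalog
+-- and query a table that was created in an earlier session.
+USE CATALOG myhive;
+SHOW TABLES;
+SELECT * FROM orders;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;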
+
+&lt;h1 id=&quot;looking-ahead&quot;&gt;Looking Ahead&lt;/h1&gt;
+
+&lt;p&gt;Flink SQL has made enormous strides to democratize stream processing, 
and 1.10 marks a significant milestone in that development.
+However, we are not ones to rest on our laurels, and the community is 
committed to raising the bar on standards while lowering the barriers to entry.
+The community is looking to add more catalogs, such as JDBC and Apache Pulsar.
+We encourage you to sign up for the &lt;a 
href=&quot;https://flink.apache.org/community.html&quot;&gt;mailing 
list&lt;/a&gt; and stay on top of the announcements and new features in 
upcoming releases.&lt;/p&gt;
+
+&lt;hr /&gt;
+
+&lt;div class=&quot;footnotes&quot;&gt;
+  &lt;ol&gt;
+    &lt;li id=&quot;fn:1&quot;&gt;
+      &lt;p&gt;My colleague Timo, who has worked on Flink SQL from the 
beginning, has the entire SQL standard printed on his desk and references it 
before any changes are merged. It’s enormous. &lt;a href=&quot;#fnref:1&quot; 
class=&quot;reversefootnote&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
+    &lt;/li&gt;
+  &lt;/ol&gt;
+&lt;/div&gt;
+</description>
+<pubDate>Thu, 20 Feb 2020 13:00:00 +0100</pubDate>
+<link>https://flink.apache.org/news/2020/02/20/ddl.html</link>
+<guid isPermaLink="true">/news/2020/02/20/ddl.html</guid>
+</item>
+
+<item>
 <title>Apache Flink 1.10.0 Release Announcement</title>
 <description>&lt;p&gt;The Apache Flink community is excited to hit the double 
digits and announce the release of Flink 1.10.0! As a result of the biggest 
community effort to date, with over 1.2k issues implemented and more than 200 
contributors, this release introduces significant improvements to the overall 
performance and stability of Flink jobs, a preview of native Kubernetes 
integration and great advances in Python support (PyFlink).&lt;/p&gt;
 
diff --git a/content/blog/index.html b/content/blog/index.html
index 9b5a943..e79c091 100644
--- a/content/blog/index.html
+++ b/content/blog/index.html
@@ -187,6 +187,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2020/02/20/ddl.html">No Java 
Required: Configuring Sources and Sinks in SQL</a></h2>
+
+      <p>20 Feb 2020
+       Seth Wiesman (<a 
href="https://twitter.com/sjwiesman";>@sjwiesman</a>)</p>
+
+      <p>This post discusses the efforts of the Flink community as they relate 
to end-to-end applications with SQL in Apache Flink.</p>
+
+      <p><a href="/news/2020/02/20/ddl.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a 
href="/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10.0 Release 
Announcement</a></h2>
 
       <p>11 Feb 2020
@@ -311,19 +324,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a 
href="/feature/2019/09/13/state-processor-api.html">The State Processor API: 
How to Read, write and modify the state of Flink applications</a></h2>
-
-      <p>13 Sep 2019
-       Seth Wiesman (<a href="https://twitter.com/sjwiesman";>@sjwiesman</a>) 
&amp; Fabian Hueske (<a href="https://twitter.com/fhueske";>@fhueske</a>)</p>
-
-      <p>This post explores the State Processor API, introduced with Flink 
1.9.0, why this feature is a big step for Flink, what you can use it for, how 
to use it and explores some future directions that align the feature with 
Apache Flink's evolution into a system for unified batch and stream 
processing.</p>
-
-      <p><a href="/feature/2019/09/13/state-processor-api.html">Continue 
reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -356,6 +356,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10.0 
Release Announcement</a></li>
 
       
diff --git a/content/blog/page10/index.html b/content/blog/page10/index.html
index 3813aff..cec91a6 100644
--- a/content/blog/page10/index.html
+++ b/content/blog/page10/index.html
@@ -187,6 +187,24 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a 
href="/news/2015/03/02/february-2015-in-flink.html">February 2015 in the Flink 
community</a></h2>
+
+      <p>02 Mar 2015
+      </p>
+
+      <p><p>February might be the shortest month of the year, but this does not
+mean that the Flink community has not been busy adding features to the
+system and fixing bugs. Here’s a rundown of the activity in the Flink
+community last month.</p>
+
+</p>
+
+      <p><a href="/news/2015/03/02/february-2015-in-flink.html">Continue 
reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a 
href="/news/2015/02/09/streaming-example.html">Introducing Flink 
Streaming</a></h2>
 
       <p>09 Feb 2015
@@ -360,6 +378,16 @@ academic and open source project that Flink originates 
from.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10.0 
Release Announcement</a></li>
 
       
diff --git a/content/blog/page2/index.html b/content/blog/page2/index.html
index e3328b7..13dd7f5 100644
--- a/content/blog/page2/index.html
+++ b/content/blog/page2/index.html
@@ -187,6 +187,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a 
href="/feature/2019/09/13/state-processor-api.html">The State Processor API: 
How to Read, write and modify the state of Flink applications</a></h2>
+
+      <p>13 Sep 2019
+       Seth Wiesman (<a href="https://twitter.com/sjwiesman";>@sjwiesman</a>) 
&amp; Fabian Hueske (<a href="https://twitter.com/fhueske";>@fhueske</a>)</p>
+
+      <p>This post explores the State Processor API, introduced with Flink 
1.9.0, why this feature is a big step for Flink, what you can use it for, how 
to use it and explores some future directions that align the feature with 
Apache Flink's evolution into a system for unified batch and stream 
processing.</p>
+
+      <p><a href="/feature/2019/09/13/state-processor-api.html">Continue 
reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a 
href="/news/2019/09/11/release-1.8.2.html">Apache Flink 1.8.2 Released</a></h2>
 
       <p>11 Sep 2019
@@ -310,19 +323,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/2019/05/03/pulsar-flink.html">When 
Flink & Pulsar Come Together</a></h2>
-
-      <p>03 May 2019
-       Sijie Guo (<a href="https://twitter.com/sijieg";>@sijieg</a>)</p>
-
-      <p>Apache Flink and Apache Pulsar are distributed data processing 
systems. When combined, they offer elastic data processing at large scale. This 
post describes how Pulsar and Flink can work together to provide a seamless 
developer experience.</p>
-
-      <p><a href="/2019/05/03/pulsar-flink.html">Continue reading 
&raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -355,6 +355,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10.0 
Release Announcement</a></li>
 
       
diff --git a/content/blog/page3/index.html b/content/blog/page3/index.html
index 8ea0789..0b1cf3c 100644
--- a/content/blog/page3/index.html
+++ b/content/blog/page3/index.html
@@ -187,6 +187,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/2019/05/03/pulsar-flink.html">When 
Flink & Pulsar Come Together</a></h2>
+
+      <p>03 May 2019
+       Sijie Guo (<a href="https://twitter.com/sijieg";>@sijieg</a>)</p>
+
+      <p>Apache Flink and Apache Pulsar are distributed data processing 
systems. When combined, they offer elastic data processing at large scale. This 
post describes how Pulsar and Flink can work together to provide a seamless 
developer experience.</p>
+
+      <p><a href="/2019/05/03/pulsar-flink.html">Continue reading 
&raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2019/04/17/sod.html">Apache 
Flink's Application to Season of Docs</a></h2>
 
       <p>17 Apr 2019
@@ -317,21 +330,6 @@ for more details.</p>
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a 
href="/news/2018/12/22/release-1.6.3.html">Apache Flink 1.6.3 Released</a></h2>
-
-      <p>22 Dec 2018
-      </p>
-
-      <p><p>The Apache Flink community released the third bugfix version of 
the Apache Flink 1.6 series.</p>
-
-</p>
-
-      <p><a href="/news/2018/12/22/release-1.6.3.html">Continue reading 
&raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -364,6 +362,16 @@ for more details.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10.0 
Release Announcement</a></li>
 
       
diff --git a/content/blog/page4/index.html b/content/blog/page4/index.html
index 3b3349c..79f0272 100644
--- a/content/blog/page4/index.html
+++ b/content/blog/page4/index.html
@@ -187,6 +187,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a 
href="/news/2018/12/22/release-1.6.3.html">Apache Flink 1.6.3 Released</a></h2>
+
+      <p>22 Dec 2018
+      </p>
+
+      <p><p>The Apache Flink community released the third bugfix version of 
the Apache Flink 1.6 series.</p>
+
+</p>
+
+      <p><a href="/news/2018/12/22/release-1.6.3.html">Continue reading 
&raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a 
href="/news/2018/12/21/release-1.7.1.html">Apache Flink 1.7.1 Released</a></h2>
 
       <p>21 Dec 2018
@@ -323,21 +338,6 @@ Please check the <a 
href="https://issues.apache.org/jira/secure/ReleaseNote.jspa
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a 
href="/news/2018/07/12/release-1.5.1.html">Apache Flink 1.5.1 Released</a></h2>
-
-      <p>12 Jul 2018
-      </p>
-
-      <p><p>The Apache Flink community released the first bugfix version of 
the Apache Flink 1.5 series.</p>
-
-</p>
-
-      <p><a href="/news/2018/07/12/release-1.5.1.html">Continue reading 
&raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -370,6 +370,16 @@ Please check the <a 
href="https://issues.apache.org/jira/secure/ReleaseNote.jspa
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10.0 
Release Announcement</a></li>
 
       
diff --git a/content/blog/page5/index.html b/content/blog/page5/index.html
index d0f26df..5cc9053 100644
--- a/content/blog/page5/index.html
+++ b/content/blog/page5/index.html
@@ -187,6 +187,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a 
href="/news/2018/07/12/release-1.5.1.html">Apache Flink 1.5.1 Released</a></h2>
+
+      <p>12 Jul 2018
+      </p>
+
+      <p><p>The Apache Flink community released the first bugfix version of 
the Apache Flink 1.5 series.</p>
+
+</p>
+
+      <p><a href="/news/2018/07/12/release-1.5.1.html">Continue reading 
&raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a 
href="/news/2018/05/25/release-1.5.0.html">Apache Flink 1.5.0 Release 
Announcement</a></h2>
 
       <p>25 May 2018
@@ -320,21 +335,6 @@ what’s coming in Flink 1.4.0 as well as a preview of what 
the Flink community
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a 
href="/news/2017/08/05/release-1.3.2.html">Apache Flink 1.3.2 Released</a></h2>
-
-      <p>05 Aug 2017
-      </p>
-
-      <p><p>The Apache Flink community released the second bugfix version of 
the Apache Flink 1.3 series.</p>
-
-</p>
-
-      <p><a href="/news/2017/08/05/release-1.3.2.html">Continue reading 
&raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -367,6 +367,16 @@ what’s coming in Flink 1.4.0 as well as a preview of what 
the Flink community
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10.0 
Release Announcement</a></li>
 
       
diff --git a/content/blog/page6/index.html b/content/blog/page6/index.html
index 459f71c..b6bf1ea 100644
--- a/content/blog/page6/index.html
+++ b/content/blog/page6/index.html
@@ -187,6 +187,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a 
href="/news/2017/08/05/release-1.3.2.html">Apache Flink 1.3.2 Released</a></h2>
+
+      <p>05 Aug 2017
+      </p>
+
+      <p><p>The Apache Flink community released the second bugfix version of 
the Apache Flink 1.3 series.</p>
+
+</p>
+
+      <p><a href="/news/2017/08/05/release-1.3.2.html">Continue reading 
&raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a 
href="/features/2017/07/04/flink-rescalable-state.html">A Deep Dive into 
Rescalable State in Apache Flink</a></h2>
 
       <p>04 Jul 2017 by Stefan Richter (<a 
href="https://twitter.com/";>@StefanRRichter</a>)
@@ -314,21 +329,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a 
href="/news/2016/12/21/release-1.1.4.html">Apache Flink 1.1.4 Released</a></h2>
-
-      <p>21 Dec 2016
-      </p>
-
-      <p><p>The Apache Flink community released the next bugfix version of the 
Apache Flink 1.1 series.</p>
-
-</p>
-
-      <p><a href="/news/2016/12/21/release-1.1.4.html">Continue reading 
&raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -361,6 +361,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10.0 
Release Announcement</a></li>
 
       
diff --git a/content/blog/page7/index.html b/content/blog/page7/index.html
index a33d324..b803a0c 100644
--- a/content/blog/page7/index.html
+++ b/content/blog/page7/index.html
@@ -187,6 +187,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a 
href="/news/2016/12/21/release-1.1.4.html">Apache Flink 1.1.4 Released</a></h2>
+
+      <p>21 Dec 2016
+      </p>
+
+      <p><p>The Apache Flink community released the next bugfix version of the 
Apache Flink 1.1 series.</p>
+
+</p>
+
+      <p><a href="/news/2016/12/21/release-1.1.4.html">Continue reading 
&raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a 
href="/news/2016/12/19/2016-year-in-review.html">Apache Flink in 2016: Year in 
Review</a></h2>
 
       <p>19 Dec 2016 by Mike Winters
@@ -318,21 +333,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a 
href="/news/2016/04/14/flink-forward-announce.html">Flink Forward 2016 Call for 
Submissions Is Now Open</a></h2>
-
-      <p>14 Apr 2016 by Aljoscha Krettek (<a 
href="https://twitter.com/";>@aljoscha</a>)
-      </p>
-
-      <p><p>We are happy to announce that the call for submissions for Flink 
Forward 2016 is now open! The conference will take place September 12-14, 2016 
in Berlin, Germany, bringing together the open source stream processing 
community. Most Apache Flink committers will attend the conference, making it 
the ideal venue to learn more about the project and its roadmap and connect 
with the community.</p>
-
-</p>
-
-      <p><a href="/news/2016/04/14/flink-forward-announce.html">Continue 
reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -365,6 +365,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10.0 
Release Announcement</a></li>
 
       
diff --git a/content/blog/page8/index.html b/content/blog/page8/index.html
index e415099..78cf897 100644
--- a/content/blog/page8/index.html
+++ b/content/blog/page8/index.html
@@ -187,6 +187,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a 
href="/news/2016/04/14/flink-forward-announce.html">Flink Forward 2016 Call for 
Submissions Is Now Open</a></h2>
+
+      <p>14 Apr 2016 by Aljoscha Krettek (<a 
href="https://twitter.com/";>@aljoscha</a>)
+      </p>
+
+      <p><p>We are happy to announce that the call for submissions for Flink 
Forward 2016 is now open! The conference will take place September 12-14, 2016 
in Berlin, Germany, bringing together the open source stream processing 
community. Most Apache Flink committers will attend the conference, making it 
the ideal venue to learn more about the project and its roadmap and connect 
with the community.</p>
+
+</p>
+
+      <p><a href="/news/2016/04/14/flink-forward-announce.html">Continue 
reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a 
href="/news/2016/04/06/cep-monitoring.html">Introducing Complex Event 
Processing (CEP) with Apache Flink</a></h2>
 
       <p>06 Apr 2016 by Till Rohrmann (<a 
href="https://twitter.com/";>@stsffap</a>)
@@ -314,20 +329,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a 
href="/news/2015/09/16/off-heap-memory.html">Off-heap Memory in Apache Flink 
and the curious JIT compiler</a></h2>
-
-      <p>16 Sep 2015 by Stephan Ewen (<a 
href="https://twitter.com/";>@stephanewen</a>)
-      </p>
-
-      <p><p>Running data-intensive code in the JVM and making it well-behaved 
is tricky. Systems that put billions of data objects naively onto the JVM heap 
face unpredictable OutOfMemoryErrors and Garbage Collection stalls. Of course, 
you still want to to keep your data in memory as much as possible, for speed 
and responsiveness of the processing applications. In that context, 
&quot;off-heap&quot; has become almost something like a magic word to solve 
these problems.</p>
-<p>In this blog post, we will look at how Flink exploits off-heap memory. The 
feature is part of the upcoming release, but you can try it out with the latest 
nightly builds. We will also give a few interesting insights into the behavior 
for Java's JIT compiler for highly optimized methods and loops.</p></p>
-
-      <p><a href="/news/2015/09/16/off-heap-memory.html">Continue reading 
&raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -360,6 +361,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10.0 
Release Announcement</a></li>
 
       
diff --git a/content/blog/page9/index.html b/content/blog/page9/index.html
index 76f6010..df5157a 100644
--- a/content/blog/page9/index.html
+++ b/content/blog/page9/index.html
@@ -187,6 +187,20 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a 
href="/news/2015/09/16/off-heap-memory.html">Off-heap Memory in Apache Flink 
and the curious JIT compiler</a></h2>
+
+      <p>16 Sep 2015 by Stephan Ewen (<a 
href="https://twitter.com/";>@stephanewen</a>)
+      </p>
+
+      <p><p>Running data-intensive code in the JVM and making it well-behaved 
is tricky. Systems that put billions of data objects naively onto the JVM heap 
face unpredictable OutOfMemoryErrors and Garbage Collection stalls. Of course, 
you still want to keep your data in memory as much as possible, for speed 
and responsiveness of the processing applications. In that context, 
&quot;off-heap&quot; has become almost something like a magic word to solve 
these problems.</p>
+<p>In this blog post, we will look at how Flink exploits off-heap memory. The 
feature is part of the upcoming release, but you can try it out with the latest 
nightly builds. We will also give a few interesting insights into the behavior 
of Java's JIT compiler for highly optimized methods and loops.</p></p>
+
+      <p><a href="/news/2015/09/16/off-heap-memory.html">Continue reading 
&raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a 
href="/news/2015/09/03/flink-forward.html">Announcing Flink Forward 
2015</a></h2>
 
       <p>03 Sep 2015
@@ -328,24 +342,6 @@ release is a preview release that contains known 
issues.</p>
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a 
href="/news/2015/03/02/february-2015-in-flink.html">February 2015 in the Flink 
community</a></h2>
-
-      <p>02 Mar 2015
-      </p>
-
-      <p><p>February might be the shortest month of the year, but this does not
-mean that the Flink community has not been busy adding features to the
-system and fixing bugs. Here’s a rundown of the activity in the Flink
-community last month.</p>
-
-</p>
-
-      <p><a href="/news/2015/03/02/february-2015-in-flink.html">Continue 
reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -378,6 +374,16 @@ community last month.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10.0 
Release Announcement</a></li>
 
       
diff --git a/content/index.html b/content/index.html
index c442619..bf6e655 100644
--- a/content/index.html
+++ b/content/index.html
@@ -559,6 +559,9 @@
 
   <dl>
       
+        <dt> <a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></dt>
+        <dd>This post discusses the efforts of the Flink community as they 
relate to end-to-end applications with SQL in Apache Flink.</dd>
+      
         <dt> <a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 
1.10.0 Release Announcement</a></dt>
         <dd><p>The Apache Flink community is excited to hit the double digits 
and announce the release of Flink 1.10.0! As a result of the biggest community 
effort to date, with over 1.2k issues implemented and more than 200 
contributors, this release introduces significant improvements to the overall 
performance and stability of Flink jobs, a preview of native Kubernetes 
integration and great advances in Python support (PyFlink).</p>
 
@@ -574,9 +577,6 @@
       
         <dt> <a 
href="/news/2020/01/29/state-unlocked-interacting-with-state-in-apache-flink.html">State
 Unlocked: Interacting with State in Apache Flink</a></dt>
         <dd>This post discusses the efforts of the Flink community as they 
relate to state management in Apache Flink. We showcase some practical examples 
of how the different features and APIs can be utilized and cover some future 
ideas for new and improved ways of managing state in Apache Flink.</dd>
-      
-        <dt> <a href="/news/2020/01/15/demo-fraud-detection.html">Advanced 
Flink Application Patterns Vol.1: Case Study of a Fraud Detection 
System</a></dt>
-        <dd>In this series of blog posts you will learn about three powerful 
Flink patterns for building streaming applications.</dd>
     
   </dl>
 
diff --git a/content/news/2020/02/20/ddl.html b/content/news/2020/02/20/ddl.html
new file mode 100644
index 0000000..67cddc7
--- /dev/null
+++ b/content/news/2020/02/20/ddl.html
@@ -0,0 +1,358 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <!-- The above 3 meta tags *must* come first in the head; any other head 
content must come *after* these tags -->
+    <title>Apache Flink: No Java Required: Configuring Sources and Sinks in 
SQL</title>
+    <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">
+    <link rel="icon" href="/favicon.ico" type="image/x-icon">
+
+    <!-- Bootstrap -->
+    <link rel="stylesheet" 
href="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css";>
+    <link rel="stylesheet" href="/css/flink.css">
+    <link rel="stylesheet" href="/css/syntax.css">
+
+    <!-- Blog RSS feed -->
+    <link href="/blog/feed.xml" rel="alternate" type="application/rss+xml" 
title="Apache Flink Blog: RSS feed" />
+
+    <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
+    <!-- We need to load Jquery in the header for custom google analytics 
event tracking-->
+    <script src="/js/jquery.min.js"></script>
+
+    <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media 
queries -->
+    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
+    <!--[if lt IE 9]>
+      <script 
src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js";></script>
+      <script 
src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js";></script>
+    <![endif]-->
+  </head>
+  <body>  
+    
+
+    <!-- Main content. -->
+    <div class="container">
+    <div class="row">
+
+      
+     <div id="sidebar" class="col-sm-3">
+        
+
+<!-- Top navbar. -->
+    <nav class="navbar navbar-default">
+        <!-- The logo. -->
+        <div class="navbar-header">
+          <button type="button" class="navbar-toggle collapsed" 
data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+          </button>
+          <div class="navbar-logo">
+            <a href="/">
+              <img alt="Apache Flink" src="/img/flink-header-logo.svg" 
width="147px" height="73px">
+            </a>
+          </div>
+        </div><!-- /.navbar-header -->
+
+        <!-- The navigation links. -->
+        <div class="collapse navbar-collapse" 
id="bs-example-navbar-collapse-1">
+          <ul class="nav navbar-nav navbar-main">
+
+            <!-- First menu section explains visitors what Flink is -->
+
+            <!-- What is Stream Processing? -->
+            <!--
+            <li><a href="/streamprocessing1.html">What is Stream 
Processing?</a></li>
+            -->
+
+            <!-- What is Flink? -->
+            <li><a href="/flink-architecture.html">What is Apache 
Flink?</a></li>
+
+            
+
+            <!-- Use cases -->
+            <li><a href="/usecases.html">Use Cases</a></li>
+
+            <!-- Powered by -->
+            <li><a href="/poweredby.html">Powered By</a></li>
+
+            <!-- FAQ -->
+            <li><a href="/faq.html">FAQ</a></li>
+
+            &nbsp;
+            <!-- Second menu section aims to support Flink users -->
+
+            <!-- Downloads -->
+            <li><a href="/downloads.html">Downloads</a></li>
+
+            <!-- Getting Started -->
+            <li>
+              <a 
href="https://ci.apache.org/projects/flink/flink-docs-release-1.10/getting-started/index.html";
 target="_blank">Getting Started <small><span class="glyphicon 
glyphicon-new-window"></span></small></a>
+            </li>
+
+            <!-- Documentation -->
+            <li class="dropdown">
+              <a class="dropdown-toggle" data-toggle="dropdown" 
href="#">Documentation<span class="caret"></span></a>
+              <ul class="dropdown-menu">
+                <li><a 
href="https://ci.apache.org/projects/flink/flink-docs-release-1.10"; 
target="_blank">1.10 (Latest stable release) <small><span class="glyphicon 
glyphicon-new-window"></span></small></a></li>
+                <li><a 
href="https://ci.apache.org/projects/flink/flink-docs-master"; 
target="_blank">Master (Latest Snapshot) <small><span class="glyphicon 
glyphicon-new-window"></span></small></a></li>
+              </ul>
+            </li>
+
+            <!-- getting help -->
+            <li><a href="/gettinghelp.html">Getting Help</a></li>
+
+            <!-- Blog -->
+            <li class="active"><a href="/blog/"><b>Flink Blog</b></a></li>
+
+
+            <!-- Flink-packages -->
+            <li>
+              <a href="https://flink-packages.org"; 
target="_blank">flink-packages.org <small><span class="glyphicon 
glyphicon-new-window"></span></small></a>
+            </li>
+            &nbsp;
+
+            <!-- Third menu section aim to support community and contributors 
-->
+
+            <!-- Community -->
+            <li><a href="/community.html">Community &amp; Project Info</a></li>
+
+            <!-- Roadmap -->
+            <li><a href="/roadmap.html">Roadmap</a></li>
+
+            <!-- Contribute -->
+            <li><a href="/contributing/how-to-contribute.html">How to 
Contribute</a></li>
+            
+
+            <!-- GitHub -->
+            <li>
+              <a href="https://github.com/apache/flink"; target="_blank">Flink 
on GitHub <small><span class="glyphicon 
glyphicon-new-window"></span></small></a>
+            </li>
+
+            &nbsp;
+
+            <!-- Language Switcher -->
+            <li>
+              
+                
+                  <!-- link to the Chinese home page when current is blog page 
-->
+                  <a href="/zh">中文版</a>
+                
+              
+            </li>
+
+          </ul>
+
+          <ul class="nav navbar-nav navbar-bottom">
+          <hr />
+
+            <!-- Twitter -->
+            <li><a href="https://twitter.com/apacheflink"; 
target="_blank">@ApacheFlink <small><span class="glyphicon 
glyphicon-new-window"></span></small></a></li>
+
+            <!-- Visualizer -->
+            <li class=" hidden-md hidden-sm"><a href="/visualizer/" 
target="_blank">Plan Visualizer <small><span class="glyphicon 
glyphicon-new-window"></span></small></a></li>
+
+          <hr />
+
+            <li><a href="https://apache.org"; target="_blank">Apache Software 
Foundation <small><span class="glyphicon 
glyphicon-new-window"></span></small></a></li>
+
+            <li>
+              <style>
+                .smalllinks:link {
+                  display: inline-block !important; background: none; 
padding-top: 0px; padding-bottom: 0px; padding-right: 0px; min-width: 75px;
+                }
+              </style>
+
+              <a class="smalllinks" href="https://www.apache.org/licenses/"; 
target="_blank">License</a> <small><span class="glyphicon 
glyphicon-new-window"></span></small>
+
+              <a class="smalllinks" href="https://www.apache.org/security/"; 
target="_blank">Security</a> <small><span class="glyphicon 
glyphicon-new-window"></span></small>
+
+              <a class="smalllinks" 
href="https://www.apache.org/foundation/sponsorship.html"; 
target="_blank">Donate</a> <small><span class="glyphicon 
glyphicon-new-window"></span></small>
+
+              <a class="smalllinks" 
href="https://www.apache.org/foundation/thanks.html"; target="_blank">Thanks</a> 
<small><span class="glyphicon glyphicon-new-window"></span></small>
+            </li>
+
+          </ul>
+        </div><!-- /.navbar-collapse -->
+    </nav>
+
+      </div>
+      <div class="col-sm-9">
+      <div class="row-fluid">
+  <div class="col-sm-12">
+    <div class="row">
+      <h1>No Java Required: Configuring Sources and Sinks in SQL</h1>
+
+      <article>
+        <p>20 Feb 2020 Seth Wiesman (<a 
href="https://twitter.com/sjwiesman";>@sjwiesman</a>)</p>
+
+<h1 id="introduction">Introduction</h1>
+
+<p>The recent <a 
href="https://flink.apache.org/news/2020/02/11/release-1.10.0.html";>Apache 
Flink 1.10 release</a> includes many exciting features.
+In particular, it marks the end of the community’s year-long effort to merge 
in the <a 
href="https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html";>Blink
 SQL contribution</a> from Alibaba.
+The reason the community chose to spend so much time on the contribution is 
that SQL works.
+It allows Flink to offer a truly unified interface over batch and streaming 
and makes stream processing accessible to a broad audience of developers and 
analysts.
+Best of all, Flink SQL is ANSI-SQL compliant, which means if you’ve ever used 
a database in the past, you already know it<sup id="fnref:1"><a href="#fn:1" 
class="footnote">1</a></sup>!</p>
+
+<p>Much of this work focused on improving runtime performance and progressively 
extending Flink SQL’s coverage of the SQL standard.
+Flink now supports the full TPC-DS query set for batch queries, reflecting the 
readiness of its SQL engine to address the needs of modern data warehouse-like 
workloads.
+Its streaming SQL supports an almost equal set of features - those that are 
well defined on a streaming runtime - including <a 
href="https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/streaming/joins.html";>complex
 joins</a> and <a 
href="https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/match_recognize.html";>MATCH_RECOGNIZE</a>.</p>
+
+<p>As important as this work is, the community also strives to make these 
features generally accessible to the broadest audience possible.
+That is why, in 1.10, the Flink community is excited to offer production-ready 
DDL syntax (e.g., <code>CREATE TABLE</code>, <code>DROP TABLE</code>) and a 
refactored catalog interface.</p>
+
+<h1 id="accessing-your-data-where-it-lives">Accessing Your Data Where It 
Lives</h1>
+
+<p>Flink does not store data at rest; it is a compute engine and requires 
other systems to consume input from and write output to.
+Those that have used Flink’s <code>DataStream</code> API in the past will be 
familiar with connectors that allow for interacting with external systems. 
+Flink has a vast connector ecosystem that includes all major message queues, 
filesystems, and databases.</p>
+
+<div class="alert alert-info">
+If your favorite system does not have a connector maintained in the central 
Apache Flink repository, check out the <a 
href="https://flink-packages.org";>flink packages website</a>, which has a 
growing number of community-maintained components.
+</div>
+
+<p>While these connectors are battle-tested and production-ready, they are 
written in Java and configured in code, which means they are not amenable to 
pure SQL or Table applications.
+For a holistic SQL experience, not only do queries need to be written in SQL, but 
also table definitions.</p>
+
+<h1 id="create-table-statements">CREATE TABLE Statements</h1>
+
+<p>While Flink SQL has long provided table abstractions atop some of Flink’s 
most popular connectors, configurations were not always so straightforward.
+Beginning in 1.10, Flink supports defining tables through <code>CREATE 
TABLE</code> statements.
+With this feature, users can now create logical tables, backed by various 
external systems, in pure SQL.</p>
+
+<p>By defining tables in SQL, developers can write queries against logical 
schemas that are abstracted away from the underlying physical data store. 
Coupled with Flink SQL’s unified approach to batch and stream processing, Flink 
provides a straight line from discovery to production.</p>
+
+<p>Users can define tables over static data sets, anything from a local CSV 
file to a full-fledged data lake or even Hive.
+Leveraging Flink’s efficient batch processing capabilities, they can perform 
ad-hoc queries searching for exciting insights.
+Once something interesting is identified, businesses can gain real-time and 
continuous insights by merely altering the table so that it is powered by a 
message queue such as Kafka.
+Because Flink guarantees SQL queries have unified semantics over batch and 
streaming, users can be confident that redeploying this query as a continuous 
streaming application over a message queue will output identical results.</p>
+
+<figure class="highlight"><pre><code class="language-sql" 
data-lang="sql"><span class="c1">-- Define a table called orders that is backed 
by a Kafka topic</span>
+<span class="c1">-- The definition includes all relevant Kafka 
properties,</span>
+<span class="c1">-- the underlying format (JSON) and even defines a</span>
+<span class="c1">-- watermarking algorithm based on one of the fields</span>
+<span class="c1">-- so that this table can be used with event time.</span>
+<span class="k">CREATE</span> <span class="k">TABLE</span> <span 
class="n">orders</span> <span class="p">(</span>
+       <span class="n">user_id</span>    <span class="nb">BIGINT</span><span 
class="p">,</span>
+       <span class="n">product</span>    <span class="n">STRING</span><span 
class="p">,</span>
+       <span class="n">order_time</span> <span class="k">TIMESTAMP</span><span 
class="p">(</span><span class="mi">3</span><span class="p">),</span>
+       <span class="n">WATERMARK</span> <span class="k">FOR</span> <span 
class="n">order_time</span> <span class="k">AS</span> <span 
class="n">order_time</span> <span class="o">-</span> <span 
class="s1">&#39;5&#39;</span> <span class="n">SECONDS</span>
+<span class="p">)</span> <span class="k">WITH</span> <span class="p">(</span>
+       <span class="s1">&#39;connector.type&#39;</span>         <span 
class="o">=</span> <span class="s1">&#39;kafka&#39;</span><span 
class="p">,</span>
+       <span class="s1">&#39;connector.version&#39;</span>      <span 
class="o">=</span> <span class="s1">&#39;universal&#39;</span><span 
class="p">,</span>
+       <span class="s1">&#39;connector.topic&#39;</span>        <span 
class="o">=</span> <span class="s1">&#39;orders&#39;</span><span 
class="p">,</span>
+       <span class="s1">&#39;connector.startup-mode&#39;</span> <span 
class="o">=</span> <span class="s1">&#39;earliest-offset&#39;</span><span 
class="p">,</span>
+       <span 
class="s1">&#39;connector.properties.bootstrap.servers&#39;</span> <span 
class="o">=</span> <span class="s1">&#39;localhost:9092&#39;</span><span 
class="p">,</span>
+       <span class="s1">&#39;format.type&#39;</span> <span class="o">=</span> 
<span class="s1">&#39;json&#39;</span> 
+<span class="p">);</span>
+
+<span class="c1">-- Define a table called product_analysis</span>
+<span class="c1">-- on top of ElasticSearch 7 where we </span>
+<span class="c1">-- can write the results of our query. </span>
+<span class="k">CREATE</span> <span class="k">TABLE</span> <span 
class="n">product_analysis</span> <span class="p">(</span>
+       <span class="n">product</span>  <span class="n">STRING</span><span 
class="p">,</span>
+       <span class="n">tracking_time</span>    <span 
class="k">TIMESTAMP</span><span class="p">(</span><span 
class="mi">3</span><span class="p">),</span>
+       <span class="n">units_sold</span>       <span class="nb">BIGINT</span>
+<span class="p">)</span> <span class="k">WITH</span> <span class="p">(</span>
+       <span class="s1">&#39;connector.type&#39;</span>    <span 
class="o">=</span> <span class="s1">&#39;elasticsearch&#39;</span><span 
class="p">,</span>
+       <span class="s1">&#39;connector.version&#39;</span> <span 
class="o">=</span> <span class="s1">&#39;7&#39;</span><span class="p">,</span>
+       <span class="s1">&#39;connector.hosts&#39;</span>   <span 
class="o">=</span> <span class="s1">&#39;localhost:9200&#39;</span><span 
class="p">,</span>
+       <span class="s1">&#39;connector.index&#39;</span>   <span 
class="o">=</span> <span class="s1">&#39;ProductAnalysis&#39;</span><span 
class="p">,</span>
+       <span class="s1">&#39;connector.document.type&#39;</span> <span 
class="o">=</span> <span class="s1">&#39;analysis&#39;</span> 
+<span class="p">);</span>
+
+<span class="c1">-- A simple query that analyzes order data</span>
+<span class="c1">-- from Kafka and writes results into </span>
+<span class="c1">-- ElasticSearch. </span>
+<span class="k">INSERT</span> <span class="k">INTO</span> <span 
class="n">product_analysis</span>
+<span class="k">SELECT</span>
+       <span class="n">product_id</span><span class="p">,</span>
+       <span class="n">TUMBLE_START</span><span class="p">(</span><span 
class="n">order_time</span><span class="p">,</span> <span 
class="nb">INTERVAL</span> <span class="s1">&#39;1&#39;</span> <span 
class="k">DAY</span><span class="p">)</span> <span class="k">as</span> <span 
class="n">tracking_time</span><span class="p">,</span>
+       <span class="k">COUNT</span><span class="p">(</span><span 
class="o">*</span><span class="p">)</span> <span class="k">as</span> <span 
class="n">units_sold</span>
+<span class="k">FROM</span> <span class="n">orders</span>
+<span class="k">GROUP</span> <span class="k">BY</span>
+       <span class="n">product_id</span><span class="p">,</span>
+       <span class="n">TUMBLE</span><span class="p">(</span><span 
class="n">order_time</span><span class="p">,</span> <span 
class="nb">INTERVAL</span> <span class="s1">&#39;1&#39;</span> <span 
class="k">DAY</span><span class="p">);</span></code></pre></figure>
+
+<h1 id="catalogs">Catalogs</h1>
+
+<p>While being able to create tables is important, it often isn’t enough.
+A business analyst, for example, shouldn’t have to know what properties to set 
for Kafka, or even have to know what the underlying data source is, to be able 
to write a query.</p>
+
+<p>To solve this problem, Flink 1.10 also ships with a revamped catalog system 
for managing metadata about tables and user defined functions.
+With catalogs, users can create tables once and reuse them across Jobs and 
Sessions.
+Now, the team managing a data set can create a table and immediately make it 
accessible to other groups within their organization.</p>
+
+<p>The most notable catalog that Flink integrates with today is Hive Metastore.
+The Hive catalog allows Flink to fully interoperate with Hive and serve as a 
more efficient query engine.
+Flink supports reading and writing Hive tables, using Hive UDFs, and even 
leveraging Hive’s metastore catalog to persist Flink-specific metadata.</p>
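+
+<p>As a minimal sketch, assuming a Hive catalog has been registered under the 
+placeholder name <code>myhive</code> (in 1.10 this is configured in the SQL 
+Client’s YAML file), tables stored in it become directly queryable from any 
+session:</p>
+
+<figure class="highlight"><pre><code class="language-sql" data-lang="sql">-- Switch to the (hypothetical) registered Hive catalog
+-- and query a table that was created in an earlier session.
+USE CATALOG myhive;
+SHOW TABLES;
+SELECT * FROM orders;</code></pre></figure>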
+
+<h1 id="looking-ahead">Looking Ahead</h1>
+
+<p>Flink SQL has made enormous strides to democratize stream processing, and 
1.10 marks a significant milestone in that development.
+However, we are not ones to rest on our laurels, and the community is 
committed to raising the bar on standards while lowering the barriers to entry.
+The community is looking to add more catalogs, such as JDBC and Apache Pulsar.
+We encourage you to sign up for the <a 
href="https://flink.apache.org/community.html";>mailing list</a> and stay on top 
of the announcements and new features in upcoming releases.</p>
+
+<hr />
+
+<div class="footnotes">
+  <ol>
+    <li id="fn:1">
+      <p>My colleague Timo, who has worked on Flink SQL from the beginning, has 
the entire SQL standard printed on his desk and references it before any 
changes are merged. It’s enormous. <a href="#fnref:1" 
class="reversefootnote">&#8617;</a></p>
+    </li>
+  </ol>
+</div>
+
+      </article>
+    </div>
+
+    <div class="row">
+      <div id="disqus_thread"></div>
+      <script type="text/javascript">
+        /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE 
* * */
+        var disqus_shortname = 'stratosphere-eu'; // required: replace example 
with your forum shortname
+
+        /* * * DON'T EDIT BELOW THIS LINE * * */
+        (function() {
+            var dsq = document.createElement('script'); dsq.type = 
'text/javascript'; dsq.async = true;
+            dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
+             (document.getElementsByTagName('head')[0] || 
document.getElementsByTagName('body')[0]).appendChild(dsq);
+        })();
+      </script>
+    </div>
+  </div>
+</div>
+      </div>
+    </div>
+
+    <hr />
+
+    <div class="row">
+      <div class="footer text-center col-sm-12">
+        <p>Copyright © 2014-2019 <a href="http://apache.org";>The Apache 
Software Foundation</a>. All Rights Reserved.</p>
+        <p>Apache Flink, Flink®, Apache®, the squirrel logo, and the Apache 
feather logo are either registered trademarks or trademarks of The Apache 
Software Foundation.</p>
+        <p><a href="/privacy-policy.html">Privacy Policy</a> &middot; <a 
href="/blog/feed.xml">RSS feed</a></p>
+      </div>
+    </div>
+    </div><!-- /.container -->
+
+    <!-- Include all compiled plugins (below), or include individual files as 
needed -->
+    <script 
src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js";></script>
+    <script 
src="https://cdnjs.cloudflare.com/ajax/libs/jquery.matchHeight/0.7.0/jquery.matchHeight-min.js";></script>
+    <script src="/js/codetabs.js"></script>
+    <script src="/js/stickysidebar.js"></script>
+
+    <!-- Google Analytics -->
+    <script>
+      
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+      (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new 
Date();a=s.createElement(o),
+      
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+      
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+
+      ga('create', 'UA-52545728-1', 'auto');
+      ga('send', 'pageview');
+    </script>
+  </body>
+</html>
diff --git a/content/zh/index.html b/content/zh/index.html
index 504a7aa..6971060 100644
--- a/content/zh/index.html
+++ b/content/zh/index.html
@@ -556,6 +556,9 @@
 
   <dl>
       
+        <dt> <a href="/news/2020/02/20/ddl.html">No Java Required: Configuring 
Sources and Sinks in SQL</a></dt>
+        <dd>This post discusses the efforts of the Flink community as they 
relate to end-to-end applications with SQL in Apache Flink.</dd>
+      
         <dt> <a href="/news/2020/02/11/release-1.10.0.html">Apache Flink 
1.10.0 Release Announcement</a></dt>
         <dd><p>The Apache Flink community is excited to hit the double digits 
and announce the release of Flink 1.10.0! As a result of the biggest community 
effort to date, with over 1.2k issues implemented and more than 200 
contributors, this release introduces significant improvements to the overall 
performance and stability of Flink jobs, a preview of native Kubernetes 
integration and great advances in Python support (PyFlink).</p>
 
@@ -571,9 +574,6 @@
       
         <dt> <a 
href="/news/2020/01/29/state-unlocked-interacting-with-state-in-apache-flink.html">State
 Unlocked: Interacting with State in Apache Flink</a></dt>
         <dd>This post discusses the efforts of the Flink community as they 
relate to state management in Apache Flink. We showcase some practical examples 
of how the different features and APIs can be utilized and cover some future 
ideas for new and improved ways of managing state in Apache Flink.</dd>
-      
-        <dt> <a href="/news/2020/01/15/demo-fraud-detection.html">Advanced 
Flink Application Patterns Vol.1: Case Study of a Fraud Detection 
System</a></dt>
-        <dd>In this series of blog posts you will learn about three powerful 
Flink patterns for building streaming applications.</dd>
     
   </dl>
 
