This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 465058d4218abe2fab4e27877d762d5083c8c322
Author: Daisy T <dais...@gmx.com>
AuthorDate: Wed Aug 11 12:20:25 2021 +0200

    Add blog post "Implementing a Custom Source Connector for Table API and SQL"
    
    This closes #460.
---
 _posts/2021-09-07-connector-table-sql-api-part1.md | 241 ++++++++++
 _posts/2021-09-07-connector-table-sql-api-part2.md | 519 +++++++++++++++++++++
 css/flink.css                                      |  12 +
 .../VVP-SQL-Editor.png                             | Bin 0 -> 165632 bytes
 .../flink-sql-client.png                           | Bin 0 -> 26982 bytes
 5 files changed, 772 insertions(+)

diff --git a/_posts/2021-09-07-connector-table-sql-api-part1.md 
b/_posts/2021-09-07-connector-table-sql-api-part1.md
new file mode 100644
index 0000000..d224a98
--- /dev/null
+++ b/_posts/2021-09-07-connector-table-sql-api-part1.md
@@ -0,0 +1,241 @@
+---
+layout: post
+title:  "Implementing a Custom Source Connector for Table API and SQL - Part 
One "
+date: 2021-09-07T00:00:00.000Z
+authors:
+- Ingo Buerk:
+  name: "Ingo Buerk"
+- Daisy Tsang:
+  name: "Daisy Tsang"
+---
+
+Part one of this tutorial will teach you how to build and run a custom source 
connector to be used with Table API and SQL, two high-level abstractions in 
Flink. The tutorial comes with a bundled 
[docker-compose](https://docs.docker.com/compose/) setup that lets you easily 
run the connector. You can then try it out with Flink’s SQL client.
+
+{% toc %}
+
+# Introduction
+
+Apache Flink is a data processing engine that aims to keep 
[state](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/state/state_backends/)
 locally in order to do computations efficiently. However, Flink does not "own" 
the data but relies on external systems to ingest and persist data. Connecting 
to external data input (**sources**) and external data storage (**sinks**) is 
usually summarized under the term **connectors** in Flink.
+
+Since connectors are such important components, Flink ships with [connectors 
for some popular 
systems](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/overview/).
 But sometimes you may need to read in an uncommon data format and what Flink 
provides is not enough. This is why Flink also provides extension points for 
building custom connectors if you want to connect to a system that is not 
supported by an existing connector.
+
+Once you have a source and a sink defined for Flink, you can use its 
declarative APIs (in the form of the [Table API and 
SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/overview/))
 to execute queries for data analysis.
+
+The **Table API** provides more programmatic access while **SQL** is a more 
universal query language. It is named Table API because of its relational 
functions on tables: how to obtain a table, how to output a table, and how to 
perform query operations on the table.
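+
+As a rough illustration of that difference, here is a sketch (assuming a `TableEnvironment` named `tableEnv` and an already registered table `T`, neither of which exists in this tutorial yet) of the same projection expressed both ways:
+
+```java
+import static org.apache.flink.table.api.Expressions.$;
+
+// Sketch only: the same query via the Table API and via SQL.
+// Assumes a TableEnvironment "tableEnv" with a registered table "T".
+Table viaTableApi = tableEnv.from("T").select($("subject"));
+Table viaSql = tableEnv.sqlQuery("SELECT subject FROM T");
+```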
+
+In this two-part tutorial, you will explore some of these APIs and concepts by 
implementing your own custom source connector for reading in data from an email 
inbox. You will then use Flink to process emails through the [IMAP 
protocol](https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol).
+
+Part one will focus on building a custom source connector and [part 
two](/2021/09/07/connector-table-sql-api-part2) will focus on integrating it.
+
+
+# Prerequisites
+
+This tutorial assumes that you have some familiarity with Java and object-oriented programming.
+
+You are encouraged to follow along with the code in this 
[repository](https://github.com/Airblader/blog-imap).
+
+It would also be useful to have 
[docker-compose](https://docs.docker.com/compose/install/) installed on your 
system in order to use the script included in the repository that builds and 
runs the connector.
+
+
+# Understand the infrastructure required for a connector
+
+In order to create a connector which works with Flink, you need:
+
+1. A _factory class_ (a blueprint for creating other objects from string 
properties) that tells Flink with which identifier (in this case, “imap”) our 
connector can be addressed, which configuration options it exposes, and how the 
connector can be instantiated. Since Flink uses the Java Service Provider 
Interface (SPI) to discover factories located in different modules, you will 
also need to add some configuration details.
+
+2. The _table source_ object as a specific instance of the connector during the planning stage. It is responsible for the back-and-forth communication with the optimizer and acts as another factory for creating the connector's runtime implementation. There are also more advanced features, such as [abilities](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/table/connector/source/abilities/package-summary.html), that can be implemented [...]
+
+3. A _runtime implementation_ from the connector obtained during the planning 
stage. The runtime logic is implemented in Flink's core connector interfaces 
and does the actual work of producing rows of dynamic table data. The runtime 
instances are shipped to the Flink cluster.
+
+Let us look at this sequence (factory class → table source → runtime 
implementation) in reverse order.
+
+# Establish the runtime implementation of the connector
+
+You first need a source connector which can be used in Flink's runtime system, defining how data goes in and how it can be executed in the cluster. There are a few different interfaces available for implementing the actual source of the data and making it discoverable in Flink.
+
+For complex connectors, you may want to implement the [Source 
interface](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/api/connector/source/Source.html)
 which gives you a lot of control. For simpler use cases, you can use the 
[SourceFunction 
interface](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/streaming/api/functions/source/SourceFunction.html).
 There are already a few different implementations of SourceFunction [...]
+
+<div class="note">
+  <h5>Hint</h5>
+  <p>The Source interface is the new abstraction whereas the SourceFunction interface is slowly being phased out.
+     All connectors will eventually implement the Source interface.
+  </p>
+</div>
+
+`RichSourceFunction` is a base class for implementing a data source that has 
access to context information and some lifecycle methods. There is a `run()` 
method inherited from the `SourceFunction` interface that you need to 
implement. It is invoked once and can be used to produce the data either once 
for a bounded result or within a loop for an unbounded stream.
+
+For example, to create a bounded data source, you could implement this method 
so that it reads all existing emails and then closes. To create an unbounded 
source, you could only look at new emails coming in while the source is active. 
You can also combine these behaviors and expose them through configuration 
options.
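+
+As a rough sketch of these two modes (the `fetchExistingEmails()` and `waitForNextEmail()` helpers and the `running` flag are hypothetical placeholders, not part of this connector), the difference is only whether `run()` returns:
+
+```java
+// Sketch only: bounded vs. unbounded behavior of run().
+public void run(SourceContext<RowData> ctx) throws Exception {
+    // Bounded: emit all existing emails once, then return.
+    for (RowData email : fetchExistingEmails()) {
+        ctx.collect(email);
+    }
+    // Unbounded: keep emitting new emails until a (hypothetical)
+    // running flag is cleared by cancel().
+    while (running) {
+        ctx.collect(waitForNextEmail());
+    }
+}
+```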
+
+When you first create the class and implement the interface, it should look 
something like this:
+
+```java
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.table.data.RowData;
+
+public class ImapSource extends RichSourceFunction<RowData> {
+  @Override
+  public void run(SourceContext<RowData> ctx) throws Exception {}
+
+  @Override
+  public void cancel() {}
+}
+```
+
+Note that internal data structures (`RowData`) are used because that is 
required by the table runtime.
+
+In the `run()` method, you get access to a 
[context](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/streaming/api/functions/source/SourceFunction.SourceContext.html)
 object inherited from the SourceFunction interface, which is a bridge to Flink 
and allows you to output data. Since the source does not produce any data yet, 
the next step is to make it produce some static data in order to test that the 
data flows correctly:
+
+```java
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+
+public class ImapSource extends RichSourceFunction<RowData> {
+  @Override
+  public void run(SourceContext<RowData> ctx) throws Exception {
+      ctx.collect(GenericRowData.of(
+          StringData.fromString("Subject 1"),
+          StringData.fromString("Hello, World!")
+      ));
+  }
+
+  @Override
+  public void cancel() {}
+}
+```
+
+You do not need to implement the `cancel()` method yet because the source 
finishes instantly.
+
+# Create and configure a dynamic table source for the data stream
+
+[Dynamic tables](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/dynamic_tables/) are the core concept of Flink’s Table API and SQL support for streaming data and, as the name suggests, they change over time. You can imagine a data stream being logically converted into a table that is constantly changing. For this tutorial, the emails that will be read in will be interpreted as a (source) table that is queryable. It can be viewed as a specific instance of [...]
+
+You will now implement a 
[DynamicTableSource](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/table/connector/source/DynamicTableSource.html)
 interface. There are two types of dynamic table sources: 
[ScanTableSource](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/table/connector/source/ScanTableSource.html)
 and 
[LookupTableSource](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache
 [...]
+
+This is what a scan table source implementation would look like:
+
+```java
+import org.apache.flink.table.connector.ChangelogMode;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.ScanTableSource;
+import org.apache.flink.table.connector.source.SourceFunctionProvider;
+
+public class ImapTableSource implements ScanTableSource {
+  @Override
+  public ChangelogMode getChangelogMode() {
+    return ChangelogMode.insertOnly();
+  }
+
+  @Override
+  public ScanRuntimeProvider getScanRuntimeProvider(ScanContext ctx) {
+    boolean bounded = true;
+    final ImapSource source = new ImapSource();
+    return SourceFunctionProvider.of(source, bounded);
+  }
+
+  @Override
+  public DynamicTableSource copy() {
+    return new ImapTableSource();
+  }
+
+  @Override
+  public String asSummaryString() {
+    return "IMAP Table Source";
+  }
+}
+```
+
+[ChangelogMode](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/table/connector/ChangelogMode.html) informs Flink of the kinds of changes the planner can expect during runtime: for example, whether the source produces only new rows, whether it also updates existing ones, or whether it can remove previously produced rows. Our source will only produce (`insertOnly()`) new rows.
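+
+For comparison, this is a sketch (not needed for this connector) of how a source that also emits updates would declare its changelog mode:
+
+```java
+import org.apache.flink.table.connector.ChangelogMode;
+import org.apache.flink.types.RowKind;
+
+// Sketch: a source producing inserts and updates would declare both row kinds.
+ChangelogMode upsertMode = ChangelogMode.newBuilder()
+    .addContainedKind(RowKind.INSERT)
+    .addContainedKind(RowKind.UPDATE_AFTER)
+    .build();
+```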
+
+[ScanRuntimeProvider](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/table/connector/source/ScanTableSource.ScanRuntimeProvider.html)
 allows Flink to create the actual runtime implementation you established 
previously (for reading the data). Flink even provides utilities like 
[SourceFunctionProvider](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/table/connector/source/SourceFunctionProvider.html)
 to wrap it  [...]
+
+You will also need to indicate whether the source is bounded or not. Currently, the source is bounded, but you will have to change this later.
+
+# Create a factory class for the connector so it can be discovered by Flink
+
+You now have a working source connector, but in order to use it in Table API 
or SQL, it needs to be discoverable by Flink. You also need to define how the 
connector is addressable from a SQL statement when creating a source table.
+
+You need to implement a 
[Factory](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/table/factories/Factory.html),
 which is a base interface that creates object instances from a list of 
key-value pairs in Flink's Table API and SQL.  A factory is uniquely identified 
by its class name and `factoryIdentifier()`.  For this tutorial, you will 
implement the more specific 
[DynamicTableSourceFactory](https://ci.apache.org/projects/flink/flink-docs-release-1.1
 [...]
+
+```java
+import java.util.HashSet;
+import java.util.Set;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.factories.DynamicTableSourceFactory;
+import org.apache.flink.table.factories.FactoryUtil;
+
+public class ImapTableSourceFactory implements DynamicTableSourceFactory {
+  @Override
+  public String factoryIdentifier() {
+    return "imap";
+  }
+
+  @Override
+  public Set<ConfigOption<?>> requiredOptions() {
+    return new HashSet<>();
+  }
+
+  @Override
+  public Set<ConfigOption<?>> optionalOptions() {
+    return new HashSet<>();
+  }
+
+  @Override
+  public DynamicTableSource createDynamicTableSource(Context ctx) {
+    final FactoryUtil.TableFactoryHelper factoryHelper = 
FactoryUtil.createTableFactoryHelper(this, ctx);
+    factoryHelper.validate();
+
+    return new ImapTableSource();
+  }
+}
+```
+
+There are currently no configuration options but they can be added and also 
validated within the `createDynamicTableSource()` function. There is a small 
helper utility, 
[TableFactoryHelper](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/table/factories/FactoryUtil.TableFactoryHelper.html),
 that Flink offers which ensures that required options are set and that no 
unknown options are provided.
+
+Finally, you need to register your factory for Java's Service Provider Interface (SPI). Classes that implement this interface can be discovered and should be added to the file `src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory` with the fully qualified class name of your factory:
+
+```java
+// if you created your class in the package org.example.acme, it should be 
named the following:
+org.example.acme.ImapTableSourceFactory
+```
+
+# Test the custom connector
+
+You should now have a working source connector. If you are following along 
with the provided repository, you can test it by running:
+
+```sh
+$ cd testing/
+$ ./build_and_run.sh
+```
+
+This builds the connector, starts a Flink cluster, a [test email 
server](https://greenmail-mail-test.github.io/greenmail/) (which you will need 
later), and the [SQL 
client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sqlclient/)
 (which is bundled in the regular Flink distribution) for you. If successful, 
you should see the SQL CLI:
+
+<div class="row front-graphic">
+  <img src="{{ site.baseurl 
}}/img/blog/2021-09-07-connector-table-sql-api/flink-sql-client.png" alt="Flink 
SQL Client"/>
+       <p class="align-center">Flink SQL Client</p>
+</div>
+
+You can now create a table (with a "subject" column and a "content" column) 
with your connector by executing the following statement with the SQL client:
+
+```sql
+CREATE TABLE T (subject STRING, content STRING) WITH ('connector' = 'imap');
+
+SELECT * FROM T;
+```
+
+Note that the schema must be exactly as written since it is currently 
hardcoded into the connector.
+
+You should be able to see the static data you provided in your source 
connector earlier, which would be "Subject 1" and "Hello, World!".
+
+Now that you have a working connector, the next step is to make it do 
something more useful than returning static data.
+
+
+# Summary
+
+In this tutorial, you looked into the infrastructure required for a connector 
and configured its runtime implementation to define how it should be executed 
in a cluster. You also defined a dynamic table source that reads the entire 
stream-converted table from the external source, made the connector 
discoverable by Flink through creating a factory class for it, and then tested 
it.
+
+# Next Steps
+
+In [part two](/2021/09/07/connector-table-sql-api-part2), you will integrate 
this connector with an email inbox through the IMAP protocol.
diff --git a/_posts/2021-09-07-connector-table-sql-api-part2.md 
b/_posts/2021-09-07-connector-table-sql-api-part2.md
new file mode 100644
index 0000000..8bc3b6b
--- /dev/null
+++ b/_posts/2021-09-07-connector-table-sql-api-part2.md
@@ -0,0 +1,519 @@
+---
+layout: post
+title:  "Implementing a custom source connector for Table API and SQL - Part 
Two "
+date: 2021-09-07T00:00:00.000Z
+authors:
+- Ingo Buerk:
+  name: "Ingo Buerk"
+- Daisy Tsang:
+  name: "Daisy Tsang"
+---
+
+In [part one](/2021/09/07/connector-table-sql-api-part1) of this tutorial, you 
learned how to build a custom source connector for Flink. In part two, you will 
learn how to integrate the connector with a test email inbox through the IMAP 
protocol and filter out emails using Flink SQL.
+
+{% toc %}
+
+# Goals
+
+Part two of the tutorial will teach you how to:
+
+- integrate a source connector which connects to a mailbox using the IMAP 
protocol
+- use [Jakarta Mail](https://eclipse-ee4j.github.io/mail/), a Java library 
that can send and receive email via the IMAP protocol
+- write [Flink 
SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/overview/)
 and execute the queries in the [Ververica 
Platform](https://www.ververica.com/apache-flink-sql-on-ververica-platform) for 
a nicer visualization
+
+You are encouraged to follow along with the code in this 
[repository](https://github.com/Airblader/blog-imap). It provides a boilerplate 
project that also comes with a bundled 
[docker-compose](https://docs.docker.com/compose/) setup that lets you easily 
run the connector. You can then try it out with Flink’s SQL client.
+
+
+# Prerequisites
+
+This tutorial assumes that you have:
+
+- followed the steps outlined in [part 
one](/2021/09/07/connector-table-sql-api-part1) of this tutorial
+- some familiarity with Java and object-oriented programming
+
+
+# Understand how to fetch emails via the IMAP protocol
+
+Now that you have a working source connector that can run on Flink, it is time 
to connect to an email server via 
[IMAP](https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol) (an 
Internet protocol that allows email clients to retrieve messages from a mail 
server) so that Flink can process emails instead of test static data.
+
+You will use [Jakarta Mail](https://eclipse-ee4j.github.io/mail/), a Java 
library that can be used to send and receive email via IMAP. For simplicity, 
authentication will use a plain username and password.
+
+This tutorial will focus more on how to implement a connector for Flink. If you want to learn more about the details of how IMAP or Jakarta Mail work, you are encouraged to explore a more extensive implementation in this [repository](https://github.com/TNG/flink-connector-email). It supports reading a wide range of information from emails, offers options to ingest existing emails alongside new ones, can connect with SSL, and more. It also supports different formats for reading email [...]
+
+In order to fetch emails, you will need to connect to the email server, 
register a listener for new emails and collect them whenever they arrive, and 
enter a loop to keep the connector running.
+
+
+# Add configuration options - server information and credentials
+
+In order to connect to your IMAP server, you will need at least the following:
+
+- hostname (of the mail server)
+- port number
+- username
+- password
+
+You will start by creating a class to encapsulate the configuration options. You will make use of [Lombok](https://projectlombok.org) to help with some boilerplate code. By adding the `@Data` and `@SuperBuilder` annotations, Lombok will generate getters, `equals()`/`hashCode()`/`toString()`, and a builder for all the fields of the immutable class.
+
+```java
+import lombok.Data;
+import lombok.experimental.SuperBuilder;
+import javax.annotation.Nullable;
+import java.io.Serializable;
+
+@Data
+@SuperBuilder(toBuilder = true)
+public class ImapSourceOptions implements Serializable {
+    private static final long serialVersionUID = 1L;
+
+    private final String host;
+    private final @Nullable Integer port;
+    private final @Nullable String user;
+    private final @Nullable String password;
+}
+```
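+
+The generated builder can then be used like this (the connection values here are made up for illustration):
+
+```java
+// Example usage of the Lombok-generated builder (values are made up).
+ImapSourceOptions options = ImapSourceOptions.builder()
+    .host("imap.example.org")
+    .port(993)
+    .user("alice")
+    .password("secret")
+    .build();
+```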
+
+Now you can add an instance of this class to the `ImapSource` and `ImapTableSource` classes previously created (in part one) so it can be used there. Take note of the column names with which the table has been created, as they will be needed later. You will also switch the source to be unbounded now, since the implementation will soon be changed to continuously listen for new emails.
+
+
+<div class="note">
+  <h5>Hint</h5>
+  <p>The column names would be "subject" and "content" with the SQL executed 
in part one:</p>
+  <pre><code class="language-sql">CREATE TABLE T (subject STRING, content 
STRING) WITH ('connector' = 'imap');</code></pre>
+</div>
+
+
+```java
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.table.data.RowData;
+import java.util.List;
+import java.util.stream.Collectors;
+
+public class ImapSource extends RichSourceFunction<RowData> {
+    private final ImapSourceOptions options;
+    private final List<String> columnNames;
+
+    public ImapSource(
+        ImapSourceOptions options,
+        List<String> columnNames
+    ) {
+        this.options = options;
+        this.columnNames = columnNames.stream()
+            .map(String::toUpperCase)
+            .collect(Collectors.toList());
+    }
+
+    // ...
+}
+```
+
+```java
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.ScanTableSource;
+import org.apache.flink.table.connector.source.SourceFunctionProvider;
+import java.util.List;
+
+public class ImapTableSource implements ScanTableSource {
+
+    private final ImapSourceOptions options;
+    private final List<String> columnNames;
+
+    public ImapTableSource(
+        ImapSourceOptions options,
+        List<String> columnNames
+    ) {
+        this.options = options;
+        this.columnNames = columnNames;
+    }
+
+    // …
+
+    @Override
+    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext ctx) {
+        final boolean bounded = false;
+        final ImapSource source = new ImapSource(options, columnNames);
+        return SourceFunctionProvider.of(source, bounded);
+    }
+
+    @Override
+    public DynamicTableSource copy() {
+        return new ImapTableSource(options, columnNames);
+    }
+
+    // …
+}
+```
+
+Finally, in the `ImapTableSourceFactory` class, you need to create a 
`ConfigOption<>` for the hostname, port number, username, and password.  Then 
you need to report them to Flink. Host, user, and password are mandatory and 
can be added to `requiredOptions()`; the port is optional and can be added to 
`optionalOptions()` instead.
+
+```java
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.table.factories.DynamicTableSourceFactory;
+
+import java.util.HashSet;
+import java.util.Set;
+
+public class ImapTableSourceFactory implements DynamicTableSourceFactory {
+
+    public static final ConfigOption<String> HOST = 
ConfigOptions.key("host").stringType().noDefaultValue();
+    public static final ConfigOption<Integer> PORT = 
ConfigOptions.key("port").intType().noDefaultValue();
+    public static final ConfigOption<String> USER = 
ConfigOptions.key("user").stringType().noDefaultValue();
+    public static final ConfigOption<String> PASSWORD = 
ConfigOptions.key("password").stringType().noDefaultValue();
+
+    // …
+
+    @Override
+    public Set<ConfigOption<?>> requiredOptions() {
+        final Set<ConfigOption<?>> options = new HashSet<>();
+        options.add(HOST);
+        options.add(USER);
+        options.add(PASSWORD);
+        return options;
+    }
+
+    @Override
+    public Set<ConfigOption<?>> optionalOptions() {
+        final Set<ConfigOption<?>> options = new HashSet<>();
+        options.add(PORT);
+        return options;
+    }
+    // …
+}
+```
+
+Now take a look at the `createDynamicTableSource()` function in the `ImapTableSourceFactory` class. Recall that previously (in part one) you used the small helper utility [TableFactoryHelper](https://ci.apache.org/projects/flink/flink-docs-release-1.13/api/java/org/apache/flink/table/factories/FactoryUtil.TableFactoryHelper.html) that Flink offers to ensure that required options are set and that no unknown options are provided. You can now use it to automatically make sure that the r [...]
+
+```java
+import java.util.List;
+import java.util.stream.Collectors;
+import org.apache.flink.table.catalog.Column;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.factories.DynamicTableSourceFactory;
+import org.apache.flink.table.factories.FactoryUtil;
+
+public class ImapTableSourceFactory implements DynamicTableSourceFactory {
+
+    // ...
+
+    @Override
+    public DynamicTableSource createDynamicTableSource(Context ctx) {
+        final FactoryUtil.TableFactoryHelper factoryHelper = 
FactoryUtil.createTableFactoryHelper(this, ctx);
+        factoryHelper.validate();
+
+        final ImapSourceOptions options = ImapSourceOptions.builder()
+            .host(factoryHelper.getOptions().get(HOST))
+            .port(factoryHelper.getOptions().get(PORT))
+            .user(factoryHelper.getOptions().get(USER))
+            .password(factoryHelper.getOptions().get(PASSWORD))
+            .build();
+
+        final List<String> columnNames = 
ctx.getCatalogTable().getResolvedSchema().getColumns().stream()
+            .filter(Column::isPhysical)
+            .map(Column::getName)
+            .collect(Collectors.toList());
+
+        return new ImapTableSource(options, columnNames);
+    }
+}
+```
+<div class="note">
+  <h5>Hint</h5>
+  <p>
+    Ideally, you would use connector <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/overview/#metadata">metadata</a> instead of column names. You can refer again to the accompanying <a href="https://github.com/TNG/flink-connector-email">repository</a> which does implement this using metadata fields.
+  </p>
+</div>
+
+To test these new configuration options, run:
+
+```sh
+$ cd testing/
+$ ./build_and_run.sh
+```
+
+Once you see the [Flink SQL 
client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sqlclient/)
 start up, execute the following statements to create a table with your 
connector:
+
+```sql
+CREATE TABLE T (subject STRING, content STRING) WITH ('connector' = 'imap');
+
+SELECT * FROM T;
+```
+
+This time it will fail because the required options are not provided:
+
+```
+[ERROR] Could not execute SQL statement. Reason:
+org.apache.flink.table.api.ValidationException: One or more required options 
are missing.
+
+Missing required options are:
+
+host
+password
+user
+```
+
+# Connect to the source email server
+
+Now that you have configured the required options to connect to the email 
server, it is time to actually connect to the server.
+
+Going back to the `ImapSource` class, you first need to convert the options given to the table source into a [Properties](https://docs.oracle.com/javase/tutorial/essential/environment/properties.html) object, which is what you can pass to the Jakarta library. You can also set various other properties here (e.g. enabling SSL).
+
+The specific properties that the Jakarta library understands are documented 
[here](https://jakarta.ee/specifications/mail/1.6/apidocs/index.html?com/sun/mail/imap/package-summary.html).
+
+
+```java
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.table.data.RowData;
+import java.util.Properties;
+
+public class ImapSource extends RichSourceFunction<RowData> {
+   // …
+
+   private Properties getSessionProperties() {
+        Properties props = new Properties();
+        props.put("mail.store.protocol", "imap");
+        props.put("mail.imap.auth", true);
+        props.put("mail.imap.host", options.getHost());
+        if (options.getPort() != null) {
+            props.put("mail.imap.port", options.getPort());
+        }
+
+        return props;
+    }
+}
+```
+
+Now create a method (`connect()`) which sets up the connection:
+
+```java
+import jakarta.mail.*;
+import com.sun.mail.imap.IMAPFolder;
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.table.data.RowData;
+
+public class ImapSource extends RichSourceFunction<RowData> {
+    // …
+
+    private transient Store store;
+    private transient IMAPFolder folder;
+
+    private void connect() throws Exception {
+        final Session session = Session.getInstance(getSessionProperties(), 
null);
+        store = session.getStore();
+        store.connect(options.getUser(), options.getPassword());
+
+        final Folder genericFolder = store.getFolder("INBOX");
+        folder = (IMAPFolder) genericFolder;
+
+        if (!folder.isOpen()) {
+            folder.open(Folder.READ_ONLY);
+        }
+    }
+}
+```
+
+You can now use this method to connect to the mail server when the source is created. Create a loop to keep the source running, polling the message count to force the server to send notifications. Lastly, implement methods to cancel and close the connection:
+
+```java
+import jakarta.mail.*;
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.streaming.api.functions.source.SourceFunction;
+import org.apache.flink.table.data.RowData;
+
+public class ImapSource extends RichSourceFunction<RowData> {
+    private transient volatile boolean running = false;
+
+    // …
+
+    @Override
+    public void run(SourceFunction.SourceContext<RowData> ctx) throws 
Exception {
+        connect();
+        running = true;
+
+        // TODO: Listen for new messages
+
+        while (running) {
+            // Trigger some IMAP request to force the server to send a 
notification
+            folder.getMessageCount();
+            Thread.sleep(250);
+        }
+    }
+
+    @Override
+    public void cancel() {
+        running = false;
+    }
+
+    @Override
+    public void close() throws Exception {
+        if (folder != null) {
+            folder.close();
+        }
+
+        if (store != null) {
+            store.close();
+        }
+    }
+}
+```
+
+Note that every loop iteration triggers an IMAP request to the server. This is crucial, as it ensures that the server will keep sending notifications. A more sophisticated approach would be to make use of the IDLE protocol.
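+
+As a sketch (using `IMAPFolder#idle()` from the Jakarta Mail IMAP provider), the polling loop could then be replaced like this:
+
+```java
+// Sketch: IMAPFolder#idle() blocks until the server pushes a notification,
+// so no fixed sleep interval is needed.
+while (running) {
+    folder.idle();
+}
+```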
+
+<div class="note">
+  <h5>Note</h5>
+  <p>Since the source is not checkpointable, no state fault tolerance will be 
possible.</p>
+</div>
+
+
+## Collect incoming emails
+
+Now you need to listen for new emails arriving in the inbox folder and collect 
them. To begin, hardcode the schema and only return the email’s subject. 
Fortunately, Jakarta provides a simple hook (`addMessageCountListener()`) to 
get notified when new messages arrive on the server. You can use this in place 
of the “TODO” comment above:
+
+```java
+import jakarta.mail.*;
+import jakarta.mail.event.MessageCountAdapter;
+import jakarta.mail.event.MessageCountEvent;
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.streaming.api.functions.source.SourceFunction;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.RowData;
+
+public class ImapSource extends RichSourceFunction<RowData> {
+    @Override
+    public void run(SourceFunction.SourceContext<RowData> ctx) throws 
Exception {
+        // …
+
+        folder.addMessageCountListener(new MessageCountAdapter() {
+            @Override
+            public void messagesAdded(MessageCountEvent e) {
+                collectMessages(ctx, e.getMessages());
+            }
+        });
+
+        // …
+    }
+
+    private void collectMessages(SourceFunction.SourceContext<RowData> ctx, 
Message[] messages) {
+        for (Message message : messages) {
+            try {
+                
ctx.collect(GenericRowData.of(StringData.fromString(message.getSubject())));
+            } catch (MessagingException ignored) {}
+        }
+    }
+}
+```
+
+Now build the project again and start up the SQL client:
+
+```sh
+$ cd testing/
+$ ./build_and_run.sh
+```
+
+This time, you will connect to a [GreenMail 
server](https://greenmail-mail-test.github.io/greenmail/) which is started as 
part of the 
[setup](https://github.com/Airblader/blog-imap/blob/master/testing/docker-compose.yaml):
+
+```sql
+CREATE TABLE T (
+    subject STRING
+) WITH (
+    'connector' = 'imap',
+    'host' = 'greenmail',
+    'port' = '3143',
+    'user' = 'alice',
+    'password' = 'alice'
+);
+
+SELECT * FROM T;
+```
+
+The query above should now run continuously, but no rows will be produced since it is a test server. You first need to send an email to the server. If you have [mailx](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/mailx.html) installed, you can do so by executing in your terminal:
+
+```sh
+$ echo "This is the email body" | mailx -Sv15-compat \
+        -s"Email Subject" \
+        -Smta="smtp://alice:alice@localhost:3025" \
+        al...@acme.org
+```
+
+The email with the subject “Email Subject” should now have appeared as a row in your output. Your source connector is working!
+
+However, since you are still hard-coding the schema produced by the source, 
defining the table with a different schema will produce errors. You want to be 
able to define which fields of an email interest you and then produce the data 
accordingly. To do this, you will use the list of column names from earlier and 
then look at it when you collect the emails.
+
+```java
+import jakarta.mail.Message;
+import jakarta.mail.MessagingException;
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.streaming.api.functions.source.SourceFunction;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+public class ImapSource extends RichSourceFunction<RowData> {
+
+    private void collectMessages(SourceFunction.SourceContext<RowData> ctx, 
Message[] messages) {
+        for (Message message : messages) {
+            try {
+                collectMessage(ctx, message);
+            } catch (MessagingException ignored) {}
+        }
+    }
+
+    private void collectMessage(SourceFunction.SourceContext<RowData> ctx, 
Message message)
+        throws MessagingException {
+        final GenericRowData row = new GenericRowData(columnNames.size());
+
+        for (int i = 0; i < columnNames.size(); i++) {
+            switch (columnNames.get(i)) {
+                case "SUBJECT":
+                    row.setField(i, 
StringData.fromString(message.getSubject()));
+                    break;
+                case "SENT":
+                    row.setField(i, 
TimestampData.fromInstant(message.getSentDate().toInstant()));
+                    break;
+                case "RECEIVED":
+                    row.setField(i, 
TimestampData.fromInstant(message.getReceivedDate().toInstant()));
+                    break;
+                // ...
+            }
+        }
+
+        ctx.collect(row);
+    }
+}
+```
+
+You should now have a working source where you can select any of the columns 
that are supported. Try it out again in the SQL client, but this time 
specifying all the columns ("subject", "sent", "received") supported above:
+
+```sql
+CREATE TABLE T (
+    subject STRING,
+    sent TIMESTAMP(3),
+    received TIMESTAMP(3)
+) WITH (
+    'connector' = 'imap',
+    'host' = 'greenmail',
+    'port' = '3143',
+    'user' = 'alice',
+    'password' = 'alice'
+);
+
+SELECT * FROM T;
+```
+
+Use the `mailx` command from earlier to send emails to the GreenMail server 
and you should see them appear. You can also try selecting only some of the 
columns, or write more complex queries.
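+
+For example, this query (using the same table `T` defined above) filters emails by subject:
+
+```sql
+SELECT subject, sent
+FROM T
+WHERE subject LIKE '%Flink%';
+```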
+
+
+# Test the connector with a real mail server on the Ververica Platform
+
+If you want to test the connector with a real mail server, you can import it 
into [Ververica Platform Community 
Edition](https://www.ververica.com/getting-started). To begin, make sure that 
you have the Ververica Platform up and running.
+
+Since the example connector in this blog post is still a bit limited, you will use the finished connector in this [repository](https://github.com/TNG/flink-connector-email) instead. You can clone that repository and build it the same way to obtain the JAR file.
+
+For this example, let's connect to a Gmail account. This requires SSL and 
comes with an additional caveat that you need to enable two-factor 
authentication and create an application password to use instead of your real 
password.
+
+First, head to SQL → Connectors. There you can create a new connector by 
uploading your JAR file. The platform will detect the connector options 
automatically. Afterwards, go back to the SQL Editor and you should now be able 
to use the connector.
+
+<div class="row front-graphic">
+  <img src="{{ site.baseurl 
}}/img/blog/2021-09-07-connector-table-sql-api/VVP-SQL-Editor.png" 
alt="Ververica Platform - SQL Editor"/>
+       <p class="align-center">Ververica Platform - SQL Editor</p>
+</div>
+
+
+# Summary
+
+Apache Flink is designed for easy extensibility and allows users to access many different external systems as data sources or sinks through a versatile set of connectors. It can read data from and write data to databases as well as local and distributed file systems.
+
+Flink also exposes APIs on top of which custom connectors can be built. In 
this two-part blog series, you explored some of these APIs and concepts and 
learned how to implement your own custom source connector that can read in data 
from an email inbox. You then used Flink to process incoming emails through the 
IMAP protocol and wrote some Flink SQL.
diff --git a/css/flink.css b/css/flink.css
index 5c0f621..1da05b6 100755
--- a/css/flink.css
+++ b/css/flink.css
@@ -249,6 +249,18 @@ img.illu {margin:40px auto 60px;display:block;}
 .committer-avatar {
        width: 50px;
 }
+
+.note {
+    padding: 8px;
+    margin-bottom: 32px;
+    background-color: lightblue;
+}
+
+.note > h1, .note > h2, .note > h3, .note > h4, .note > h5, .note > h6 {
+    margin-top: initial;
+    padding-top: initial;
+}
+
 /*=============================================================================
                                 Powered By Carousel
 =============================================================================*/
diff --git a/img/blog/2021-09-07-connector-table-sql-api/VVP-SQL-Editor.png 
b/img/blog/2021-09-07-connector-table-sql-api/VVP-SQL-Editor.png
new file mode 100644
index 0000000..075b70b
Binary files /dev/null and 
b/img/blog/2021-09-07-connector-table-sql-api/VVP-SQL-Editor.png differ
diff --git a/img/blog/2021-09-07-connector-table-sql-api/flink-sql-client.png 
b/img/blog/2021-09-07-connector-table-sql-api/flink-sql-client.png
new file mode 100644
index 0000000..2b7858c
Binary files /dev/null and 
b/img/blog/2021-09-07-connector-table-sql-api/flink-sql-client.png differ
