infoverload commented on a change in pull request #460:
URL: https://github.com/apache/flink-web/pull/460#discussion_r701043945



##########
File path: _posts/2021-08-16-connector-table-sql-api-part1.md
##########
@@ -0,0 +1,241 @@
+---
+layout: post
+title: "Implementing a Custom Source Connector for Table API and SQL - Part One"
+date: 2021-08-27T00:00:00.000Z
+authors:
+- Ingo Buerk:
+  name: "Ingo Buerk"
+excerpt: 
+---
+
+{% toc %} 
+
+# Introduction
+
+Apache Flink is a data processing engine that aims to keep [state](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/state/state_backends/) locally in order to do computations efficiently. However, Flink does not "own" the data but relies on external systems to ingest and persist it. Connecting to external data inputs (**sources**) and external data storage (**sinks**) is usually summarized under the term **connectors** in Flink.
+
+Since connectors are such important components, Flink ships with [connectors for some popular systems](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/overview/). But sometimes you may need to read in an uncommon data format, and what Flink provides is not enough. This is why Flink also provides [extension points](#) for building custom connectors if you want to connect to a system that is not supported by an existing connector.

Review comment:
       Is there a good page I should link it to?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
