JingsongLi commented on a change in pull request #9802: 
[FLINK-13361][documentation] Add documentation for JDBC connector for Table API & 
SQL
URL: https://github.com/apache/flink/pull/9802#discussion_r332918860
 
 

 ##########
 File path: docs/dev/table/connect.md
 ##########
 @@ -1075,6 +1075,88 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### JDBC Connector
+
+<span class="label label-primary">Source: Batch</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Append Mode</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+<span class="label label-primary">Temporal Join: Sync Mode</span>
+
+The JDBC connector allows for reading data from and writing data into a relational database via JDBC.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
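+
+For illustration, assuming a hypothetical source table `clicks` and a hypothetical JDBC sink table `click_counts`, an aggregating query such as the following produces a result keyed by its grouping column, so the sink would consume it in upsert mode with `user_id` as the key:
+
+{% highlight sql %}
+-- hypothetical tables: clicks (source) and click_counts (JDBC sink)
+INSERT INTO click_counts
+SELECT user_id, COUNT(*) AS cnt
+FROM clicks
+GROUP BY user_id;
+{% endhighlight %}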
+
+To use this connector, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-jdbc{{ site.scala_version_suffix }}</artifactId>
+  <version>{{ site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+You must also add a dependency on the JDBC driver for your database. For example, to use MySQL, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+    <groupId>mysql</groupId>
+    <artifactId>mysql-connector-java</artifactId>
+    <version>8.0.17</version>
+</dependency>
+{% endhighlight %}
+
+**Library support:** Currently, only MySQL, Derby, and PostgreSQL are supported.
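+
+For illustration, connection URLs for these databases follow the standard JDBC URL formats (the hosts, ports, and database names below are placeholders):
+
+{% highlight sql %}
+'connector.url' = 'jdbc:mysql://localhost:3306/mydb'       -- MySQL
+'connector.url' = 'jdbc:postgresql://localhost:5432/mydb'  -- PostgreSQL
+'connector.url' = 'jdbc:derby:memory:mydb'                 -- Derby (in-memory)
+{% endhighlight %}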
+
+The connector can be defined as follows:
+
+<div data-lang="DDL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  ...
+) WITH (
+  'connector.type' = 'jdbc', -- required: specify that the connector type is jdbc
+  
+  'connector.url' = 'jdbc:derby:memory:upsert', -- required: JDBC DB url
+  
+  'connector.table' = 'jdbc_table_name',  -- required: jdbc table name
+  
+  'connector.driver' = 'driver', -- optional: the class name of the JDBC driver; if not set, it will be derived automatically from the URL.
+
+  'connector.username' = 'name', -- optional: jdbc user name and password
+  'connector.password' = 'password',
+  
+  -- scan options, optional, used when reading from table
+  'connector.read.partition.column' = 'column_name', -- optional, name of the 
column used for partitioning the input.
+  'connector.read.partition.num' = '50', -- optional, the number of partitions.
+  'connector.read.partition.lower-bound' = '500', -- optional, the smallest 
value of the first partition.
+  'connector.read.fetch-size' = '100', -- optional, the maximum number of 
partitions that can be used for parallelism in table reading.
 
 Review comment:
   This is for JDBC `Statement.setFetchSize`, I think we can just use its 
comments, the important thing is that it is just a hint.
   
   > Gives the reader a hint as to the number of rows that should be fetched 
from the database when reading. If the value specified is zero, then the hint 
is ignored. The default value is zero.
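   
   For example, a sketch of how that option's comment could read, reusing the javadoc wording above:
   
   ```sql
   'connector.read.fetch-size' = '100', -- optional, gives the reader a hint as to the number of rows that
                                        -- should be fetched from the database when reading; if zero, the
                                        -- hint is ignored (the default value is zero).
   ```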
