This is an automated email from the ASF dual-hosted git repository.

danderson pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
     new 00785b00b84 [FLINK-33587][docs] Tidy up docs around JDBC
00785b00b84 is described below

commit 00785b00b84dadae4c8b33a35ca9a5ac48e4a4c6
Author: Robin Moffatt <ro...@rmoff.net>
AuthorDate: Mon Nov 27 14:47:13 2023 +0000

    [FLINK-33587][docs] Tidy up docs around JDBC
---
 .../hive-compatibility/hive-dialect/overview.md    |   4 +-
 .../dev/table/hive-compatibility/hiveserver2.md    | 316 ---------------------
 docs/content/docs/dev/table/jdbcDriver.md          |  74 +++--
 .../docs/dev/table/sql-gateway/hiveserver2.md      | 297 ++++++++++++++++++-
 .../content/docs/dev/table/sql-gateway/overview.md |   2 +-
 5 files changed, 343 insertions(+), 350 deletions(-)

diff --git 
a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md 
b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
index 4fc930e1bc2..1eba857f073 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
@@ -37,7 +37,7 @@ statement you execute. There's no need to restart a session 
to use a different d
 
 - To use Hive dialect, you have to add dependencies related to Hive. Please 
refer to [Hive dependencies]({{< ref "docs/connectors/table/hive/overview" 
>}}#dependencies) for how to add the dependencies.
 - Please make sure the current catalog is [HiveCatalog]({{< ref 
"docs/connectors/table/hive/hive_catalog" >}}). Otherwise, it will fall back to 
Flink's `default` dialect.
-  When using SQL Gateway configured with [HiveServer2 Endpoint]({{< ref 
"docs/dev/table/hive-compatibility/hiveserver2" >}}), the current catalog will 
be a HiveCatalog by default.
+  When using SQL Gateway configured with [HiveServer2 Endpoint]({{< ref 
"docs/dev/table/sql-gateway/hiveserver2" >}}), the current catalog will be a 
HiveCatalog by default.
 - In order to have better syntax and semantic compatibility, it’s highly 
recommended to load [HiveModule]({{< ref 
"docs/connectors/table/hive/hive_functions" 
>}}#use-hive-built-in-functions-via-hivemodule) and
   place it first in the module list, so that Hive built-in functions can be 
picked up during function resolution.
   Please refer [here]({{< ref "docs/dev/table/modules" 
>}}#how-to-load-unload-use-and-list-modules) for how to change resolution order.
@@ -64,7 +64,7 @@ Flink SQL> SET table.sql-dialect = default; -- to use Flink 
default dialect
 
 ### SQL Gateway Configured With HiveServer2 Endpoint
 
-When using the SQL Gateway configured with HiveServer2 Endpoint, the dialect 
will be Hive dialect by default, so you don't need to do anything if you want 
to use Hive dialect. But you can still
+When using the [SQL Gateway configured with HiveServer2 Endpoint]({{<ref "docs/dev/table/sql-gateway/hiveserver2">}}), Hive dialect is used by default, so you don't need to do anything to use it. But you can still
 change the dialect to Flink default dialect.
 
 ```bash
diff --git a/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md 
b/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md
deleted file mode 100644
index 9d6dc46d143..00000000000
--- a/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md
+++ /dev/null
@@ -1,316 +0,0 @@
----
-title: HiveServer2 Endpoint
-weight: 1
-type: docs
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# HiveServer2 Endpoint
-
-[Flink SQL Gateway]({{< ref "docs/dev/table/sql-gateway/overview" >}}) 
supports deploying as a HiveServer2 Endpoint which is compatible with 
[HiveServer2](https://cwiki.apache.org/confluence/display/hive/hiveserver2+overview)
 
-wire protocol and allows users to interact (e.g. submit Hive SQL) with Flink 
SQL Gateway with existing Hive clients, such as Hive JDBC, Beeline, DBeaver, 
Apache Superset and so on.
-
-Setting Up
-----------------
-Before the trip of the SQL Gateway with the HiveServer2 Endpoint, please 
prepare the required [dependencies]({{< ref 
"docs/connectors/table/hive/overview#dependencies" >}}).
-
-### Configure HiveServer2 Endpoint
-
-The HiveServer2 Endpoint is not the default endpoint for the SQL Gateway. You 
can configure to use the HiveServer2 Endpoint by calling 
-```bash
-$ ./bin/sql-gateway.sh start -Dsql-gateway.endpoint.type=hiveserver2 
-Dsql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir=<path to hive conf>
-```
-
-or add the following configuration into `conf/flink-conf.yaml` (please replace 
the `<path to hive conf>` with your hive conf path).
-
-```yaml
-sql-gateway.endpoint.type: hiveserver2
-sql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir: <path to hive conf>
-```
-
-### Connecting to HiveServer2 
-
-After starting the SQL Gateway, you are able to submit SQL with Apache Hive 
Beeline.
-
-```bash
-$ ./beeline
-SLF4J: Class path contains multiple SLF4J bindings.
-SLF4J: Found binding in 
[jar:file:/Users/ohmeatball/Work/hive-related/apache-hive-2.3.9-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
-SLF4J: Found binding in 
[jar:file:/usr/local/Cellar/hadoop/3.2.1_1/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
-SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.
-SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
-Beeline version 2.3.9 by Apache Hive
-beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl
-Connecting to jdbc:hive2://localhost:10000/default;auth=noSasl
-Enter username for jdbc:hive2://localhost:10000/default:
-Enter password for jdbc:hive2://localhost:10000/default:
-Connected to: Apache Flink (version 1.16)
-Driver: Hive JDBC (version 2.3.9)
-Transaction isolation: TRANSACTION_REPEATABLE_READ
-0: jdbc:hive2://localhost:10000/default> CREATE TABLE Source (
-. . . . . . . . . . . . . . . . . . . .> a INT,
-. . . . . . . . . . . . . . . . . . . .> b STRING
-. . . . . . . . . . . . . . . . . . . .> );
-+---------+
-| result  |
-+---------+
-| OK      |
-+---------+
-0: jdbc:hive2://localhost:10000/default> CREATE TABLE Sink (
-. . . . . . . . . . . . . . . . . . . .> a INT,
-. . . . . . . . . . . . . . . . . . . .> b STRING
-. . . . . . . . . . . . . . . . . . . .> );
-+---------+
-| result  |
-+---------+
-| OK      |
-+---------+
-0: jdbc:hive2://localhost:10000/default> INSERT INTO Sink SELECT * FROM 
Source; 
-+-----------------------------------+
-|              job id               |
-+-----------------------------------+
-| 55ff290b57829998ea6e9acc240a0676  |
-+-----------------------------------+
-1 row selected (2.427 seconds)
-```
-
-Endpoint Options
-----------------
-
-Below are the options supported when creating a HiveServer2 Endpoint instance 
with YAML file or DDL.
-
-<table class="configuration table table-bordered">
-    <thead>
-        <tr>
-            <th class="text-left" style="width: 20%">Key</th>
-            <th class="text-center" style="width: 8%">Required</th>
-            <th class="text-left" style="width: 7%">Default</th>
-            <th class="text-left" style="width: 10%">Type</th>
-            <th class="text-left" style="width: 55%">Description</th>
-        </tr>
-    </thead>
-    <tbody>
-        <tr>
-            <td><h5>sql-gateway.endpoint.type</h5></td>
-            <td>required</td>
-            <td style="word-wrap: break-word;">"rest"</td>
-            <td>List&lt;String&gt;</td>
-            <td>Specify which endpoint to use, here should be 
'hiveserver2'.</td>
-        </tr>
-        <tr>
-            
<td><h5>sql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir</h5></td>
-            <td>required</td>
-            <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
-            <td>URI to your Hive conf dir containing hive-site.xml. The URI 
needs to be supported by Hadoop FileSystem. If the URI is relative, i.e. 
without a scheme, local file system is assumed. If the option is not specified, 
hive-site.xml is searched in class path.</td>
-        </tr>
-        <tr>
-            
<td><h5>sql-gateway.endpoint.hiveserver2.catalog.default-database</h5></td>
-            <td>optional</td>
-            <td style="word-wrap: break-word;">"default"</td>
-            <td>String</td>
-            <td>The default database to use when the catalog is set as the 
current catalog.</td>
-        </tr>
-        <tr>
-            <td><h5>sql-gateway.endpoint.hiveserver2.catalog.name</h5></td>
-            <td>optional</td>
-            <td style="word-wrap: break-word;">"hive"</td>
-            <td>String</td>
-            <td>Name for the pre-registered hive catalog.</td>
-        </tr>
-        <tr>
-            <td><h5>sql-gateway.endpoint.hiveserver2.module.name</h5></td>
-            <td>optional</td>
-            <td style="word-wrap: break-word;">"hive"</td>
-            <td>String</td>
-            <td>Name for the pre-registered hive module.</td>
-        </tr>
-        <tr>
-            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.exponential.backoff.slot.length</h5></td>
-            <td>optional</td>
-            <td style="word-wrap: break-word;">100 ms</td>
-            <td>Duration</td>
-            <td>Binary exponential backoff slot time for Thrift clients during 
login to HiveServer2,for retries until hitting Thrift client timeout</td>
-        </tr>
-        <tr>
-            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.host</h5></td>
-            <td>optional</td>
-            <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
-            <td>The server address of HiveServer2 host to be used for 
communication.Default is empty, which means the to bind to the localhost. This 
is only necessary if the host has multiple network addresses.</td>
-        </tr>
-        <tr>
-            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.login.timeout</h5></td>
-            <td>optional</td>
-            <td style="word-wrap: break-word;">20 s</td>
-            <td>Duration</td>
-            <td>Timeout for Thrift clients during login to HiveServer2</td>
-        </tr>
-        <tr>
-            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.max.message.size</h5></td>
-            <td>optional</td>
-            <td style="word-wrap: break-word;">104857600</td>
-            <td>Long</td>
-            <td>Maximum message size in bytes a HS2 server will accept.</td>
-        </tr>
-        <tr>
-            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.port</h5></td>
-            <td>optional</td>
-            <td style="word-wrap: break-word;">10000</td>
-            <td>Integer</td>
-            <td>The port of the HiveServer2 endpoint.</td>
-        </tr>
-        <tr>
-            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.keepalive-time</h5></td>
-            <td>optional</td>
-            <td style="word-wrap: break-word;">1 min</td>
-            <td>Duration</td>
-            <td>Keepalive time for an idle worker thread. When the number of 
workers exceeds min workers, excessive threads are killed after this time 
interval.</td>
-        </tr>
-        <tr>
-            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.threads.max</h5></td>
-            <td>optional</td>
-            <td style="word-wrap: break-word;">512</td>
-            <td>Integer</td>
-            <td>The maximum number of Thrift worker threads</td>
-        </tr>
-        <tr>
-            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.threads.min</h5></td>
-            <td>optional</td>
-            <td style="word-wrap: break-word;">5</td>
-            <td>Integer</td>
-            <td>The minimum number of Thrift worker threads</td>
-        </tr>
-    </tbody>
-</table>
-
-HiveServer2 Protocol Compatibility
-----------------
-
-The Flink SQL Gateway with HiveServer2 Endpoint aims to provide the same 
experience compared to the HiveServer2 of Apache Hive.
-Therefore, HiveServer2 Endpoint automatically initialize the environment to 
have more consistent experience for Hive users:
-- create the [Hive Catalog]({{< ref 
"docs/connectors/table/hive/hive_catalog.md" >}}) as the default catalog;
-- use Hive built-in function by loading Hive function module and place it 
first in the [function module]({{< ref "docs/dev/table/modules/index.md" >}}) 
list;
-- switch to the Hive dialect (`table.sql-dialect = hive`);
-- switch to batch execution mode (`execution.runtime-mode = BATCH`);
-- execute DML statements (e.g. INSERT INTO) blocking and one by one 
(`table.dml-sync = true`).
-
-With these essential prerequisites, you can submit the Hive SQL in Hive style 
but execute it in the Flink environment.
-
-Clients & Tools
-----------------
-
-The HiveServer2 Endpoint is compatible with the HiveServer2 wire protocol. 
Therefore, the tools that manage the Hive SQL also work for
-the SQL Gateway with the HiveServer2 Endpoint. Currently, Hive JDBC, Hive 
Beeline, Dbeaver, Apache Superset and so on are tested to be able to connect to 
the
-Flink SQL Gateway with HiveServer2 Endpoint and submit SQL.
-
-### Hive JDBC
-
-SQL Gateway is compatible with HiveServer2. You can write a program that uses 
Hive JDBC to connect to SQL Gateway. To build the program, add the 
-following dependencies in your project pom.xml.
-
-```xml
-<dependency>
-    <groupId>org.apache.hive</groupId>
-    <artifactId>hive-jdbc</artifactId>
-    <version>${hive.version}</version>
-</dependency>
-```
-
-After reimport the dependencies, you can use the following program to connect 
and list tables in the Hive Catalog.
-
-```java
-
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.ResultSet;
-import java.sql.Statement;
-
-public class JdbcConnection {
-    public static void main(String[] args) throws Exception {
-        try (
-                // Please replace the JDBC URI with your actual host, port and 
database.
-                Connection connection = 
DriverManager.getConnection("jdbc:hive2://{host}:{port}/{database};auth=noSasl");
 
-                Statement statement = connection.createStatement()) {
-            statement.execute("SHOW TABLES");
-            ResultSet resultSet = statement.getResultSet();
-            while (resultSet.next()) {
-                System.out.println(resultSet.getString(1));
-            }
-        }
-    }
-}
-```
-
-### DBeaver
-
-DBeaver uses Hive JDBC to connect to the HiveServer2. So DBeaver can connect 
to the Flink SQL Gateway to submit Hive SQL. Considering the
-API compatibility, you can connect to the Flink SQL Gateway like HiveServer2. 
Please refer to the 
[guidance](https://github.com/dbeaver/dbeaver/wiki/Apache-Hive)
-about how to use DBeaver to connect to the Flink SQL Gateway with the 
HiveServer2 Endpoint.
-
-<span class="label label-danger">Attention</span> Currently, HiveServer2 
Endpoint doesn't support authentication. Please use 
-the following JDBC URL to connect to the DBeaver:
-
-```bash
-jdbc:hive2://{host}:{port}/{database};auth=noSasl
-```
-
-After the setup, you can explore Flink with DBeaver.
-
-{{< img width="80%" src="/fig/dbeaver.png" alt="DBeaver" >}}
-
-### Apache Superset
-
-Apache Superset is a powerful data exploration and visualization platform. 
With the API compatibility, you can connect 
-to the Flink SQL Gateway like Hive. Please refer to the 
[guidance](https://superset.apache.org/docs/databases/hive) for more details.
-
-{{< img width="80%" src="/fig/apache_superset.png" alt="Apache Superset" >}}
-
-<span class="label label-danger">Attention</span> Currently, HiveServer2 
Endpoint doesn't support authentication. Please use
-the following JDBC URL to connect to the Apache Superset:
-
-```bash
-hive://hive@{host}:{port}/{database}?auth=NOSASL
-```
-
-Streaming SQL
-----------------
-
-Flink is a batch-streaming unified engine. You can switch to the streaming SQL 
with the following SQL
-
-```bash
-SET table.sql-dialect=default; 
-SET execution.runtime-mode=streaming; 
-SET table.dml-sync=false;
-```
-
-After that, the environment is ready to parse the Flink SQL, optimize with the 
streaming planner and submit the job in async mode.
-
-{{< hint info >}}
-Notice: The `RowKind` in the HiveServer2 API is always `INSERT`. Therefore, 
HiveServer2 Endpoint doesn't support
-to present the CDC data.
-{{< /hint >}}
-
-Supported Types
-----------------
-
-The HiveServer2 Endpoint is built on the Hive2 now and supports all Hive2 
available types. For Hive-compatible tables, the HiveServer2 Endpoint
-obeys the same rule as the HiveCatalog to convert the Flink types to Hive 
Types and serialize them to the thrift object. Please refer to
-the [HiveCatalog]({{< ref 
"docs/connectors/table/hive/hive_catalog#supported-types" >}}) for the type 
mappings.
\ No newline at end of file
diff --git a/docs/content/docs/dev/table/jdbcDriver.md 
b/docs/content/docs/dev/table/jdbcDriver.md
index 31a7a44ddba..ad23a440c25 100644
--- a/docs/content/docs/dev/table/jdbcDriver.md
+++ b/docs/content/docs/dev/table/jdbcDriver.md
@@ -1,5 +1,5 @@
 ---
-title: "SQL JDBC Driver"
+title: "Flink JDBC Driver"
 weight: 91
 type: docs
 aliases:
@@ -26,11 +26,15 @@ under the License.
 
 # Flink JDBC Driver
 
-Flink JDBC Driver is a Java library for connecting and submitting SQL 
statements to [SQL Gateway]({{< ref "docs/dev/table/sql-gateway/overview" >}}) 
as the JDBC server.
+The Flink JDBC Driver is a Java library for enabling clients to send Flink SQL 
to your Flink cluster via the [SQL Gateway]({{< ref 
"docs/dev/table/sql-gateway/overview" >}}).
 
-# Usage
+You can also use the [Hive JDBC Driver]({{< ref "docs/dev/table/sql-gateway/hiveserver2#hive-jdbc" >}}) with Flink. This is beneficial if you are running [Hive dialect SQL]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/overview">}}) and want to make use of the Hive Catalog. To use Hive JDBC with Flink, you need to run the [SQL Gateway]({{< ref "docs/dev/table/sql-gateway/overview" >}}) with the [HiveServer2 endpoint]({{<ref "docs/dev/table/sql-gateway/hiveserver2">}}).
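+
+For example, you can start the gateway with the HiveServer2 endpoint as shown below (the same command described in the [HiveServer2 endpoint docs]({{<ref "docs/dev/table/sql-gateway/hiveserver2">}}); replace `<path to hive conf>` with your Hive conf path):
+
+```bash
+$ ./bin/sql-gateway.sh start -Dsql-gateway.endpoint.type=hiveserver2 -Dsql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir=<path to hive conf>
+```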
 
-Before using Flink JDBC driver, you need to start a SQL Gateway as the JDBC 
server and binds it with your Flink cluster. We now assume that you have a 
gateway started and connected to a running Flink cluster.
+## Usage
+
+Before using the Flink JDBC driver, you need to start a SQL Gateway with the REST endpoint. The gateway acts as the JDBC server and connects to your Flink cluster.
+
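+For example, a minimal way to start the gateway with the default REST endpoint (a sketch; see [Starting the SQL Gateway]({{< ref "docs/dev/table/sql-gateway/overview#starting-the-sql-gateway" >}}) for the full set of options):
+
+```bash
+$ ./bin/sql-gateway.sh start -Dsql-gateway.endpoint.rest.address=localhost
+```
+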
+The examples below assume that you have a [gateway started]({{< ref 
"docs/dev/table/sql-gateway/overview#starting-the-sql-gateway" >}}) and 
connected to a running Flink cluster.
 
 ## Dependency
 
@@ -51,16 +55,23 @@ You can also add dependency of Flink JDBC driver in your 
maven or gradle project
     </dependency>
 ```
 
-## Use with a JDBC Tool
-### Use with Beeline
+## JDBC Clients
+
+The Flink JDBC driver is not included with the Flink distribution. You can 
download it from 
[Maven](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-jdbc-driver-bundle/).
 
+
+You may also need the 
[SLF4J](https://repo1.maven.org/maven2/org/slf4j/slf4j-api/) 
(`slf4j-api-{slf4j.version}.jar`) jar.
+
+### Beeline
 
 Beeline is the command line tool for accessing [Apache 
Hive](https://hive.apache.org/), but it also supports general JDBC drivers. To 
install Hive and beeline, see [Hive 
documentation](https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-RunningHiveServer2andBeeline.1).
 
 1. Download flink-jdbc-driver-bundle-{VERSION}.jar from [download 
page](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-jdbc-driver-bundle/)
 and add it to `$HIVE_HOME/lib`.
 2. Run beeline and connect to a Flink SQL gateway. As Flink SQL gateway 
currently ignores user names and passwords, just leave them empty.
-    ```
+
+    ```sql
     beeline> !connect jdbc:flink://localhost:8083
     ```
+
 3. Execute any statement you want.
 
 **Sample Commands**
@@ -99,23 +110,33 @@ No rows affected (0.108 seconds)
 0: jdbc:flink://localhost:8083> 
 ```
 
-### Use with SqlLine
+### SQLLine
 
-[SqlLine](https://github.com/julianhyde/sqlline) is a lightweight JDBC command 
line tool, it supports general JDBC drivers. You need to clone the codes from 
github and compile the project with mvn first.
+[SQLLine](https://github.com/julianhyde/sqlline) is a lightweight JDBC command 
line tool that supports general JDBC drivers. 
 
-1. Download flink-jdbc-driver-bundle-{VERSION}.jar from [download 
page](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-jdbc-driver-bundle/)
 and add it to `target` directory of SqlLine project. Notice that you need to 
copy slf4j-api-{slf4j.version}.jar to `target` which will be used by flink JDBC 
driver. 
-2. Run SqlLine with command `bin/sqlline` and connect to a Flink SQL gateway. 
As Flink SQL gateway currently ignores user names and passwords, just leave 
them empty.
-    ```
+To use SQLLine, you will need to clone [the GitHub repository](https://github.com/julianhyde/sqlline) and compile the project first (`./mvnw package -DskipTests`).
+
+1. Download the following JARs and add them both to the `target` directory of 
SQLLine project:
+   1. [Flink JDBC 
Driver](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-jdbc-driver-bundle/)
 (`flink-jdbc-driver-bundle-{VERSION}.jar`) 
+   2. [SLF4J](https://repo1.maven.org/maven2/org/slf4j/slf4j-api/) 
(`slf4j-api-{slf4j.version}.jar`)
+2. Run SQLLine with the command `./bin/sqlline`.
+3. From SQLLine, connect to a Flink SQL gateway using the `!connect` command. 
+
+    Since the Flink SQL gateway currently ignores user names and passwords, just leave them empty.
+
+    ```sql
+    sqlline version 1.13.0-SNAPSHOT
     sqlline> !connect jdbc:flink://localhost:8083
+    Enter username for jdbc:flink://localhost:8083:
+    Enter password for jdbc:flink://localhost:8083:
+    0: jdbc:flink://localhost:8083>
     ```
-3. Execute any statement you want.
+
+4. You can now execute any Flink SQL statement you want.
 
 **Sample Commands**
-```
-sqlline version 1.12.0
-sqlline> !connect jdbc:flink://localhost:8083
-Enter username for jdbc:flink://localhost:8083:
-Enter password for jdbc:flink://localhost:8083:
+
+```sql
 0: jdbc:flink://localhost:8083> CREATE TABLE T(
 . . . . . . . . . . . . . . .)>      a INT,
 . . . . . . . . . . . . . . .)>      b VARCHAR(10)
@@ -125,6 +146,7 @@ Enter password for jdbc:flink://localhost:8083:
 . . . . . . . . . . . . . . .)>      'format' = 'csv'
 . . . . . . . . . . . . . . .)>  );
 No rows affected (0.122 seconds)
+
 0: jdbc:flink://localhost:8083> INSERT INTO T VALUES (1, 'Hi'), (2, 'Hello');
 +----------------------------------+
 |              job id              |
@@ -132,6 +154,7 @@ No rows affected (0.122 seconds)
 | fbade1ab4450fc57ebd5269fdf60dcfd |
 +----------------------------------+
 1 row selected (1.282 seconds)
+
 0: jdbc:flink://localhost:8083> SELECT * FROM T;
 +---+-------+
 | a |   b   |
@@ -143,7 +166,8 @@ No rows affected (0.122 seconds)
 0: jdbc:flink://localhost:8083>
 ```
 
-### Use with Tableau
+### Tableau
+
 [Tableau](https://www.tableau.com/) is an interactive data visualization 
software. It supports *Other Database (JDBC)* connection from version 2018.3. 
You'll need Tableau with version >= 2018.3 to use Flink JDBC driver. For 
general usage of *Other Database (JDBC)* in Tableau, see [Tableau 
documentation](https://help.tableau.com/current/pro/desktop/en-us/examples_otherdatabases_jdbc.htm).
 
 1. Download flink-jdbc-driver-(VERSION).jar from the [download 
page](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-jdbc-driver-bundle/)
 and add it to Tableau driver path.
@@ -155,13 +179,13 @@ No rows affected (0.122 seconds)
 
 ### Use with other JDBC Tools
 
-Any tool supporting JDBC API can be used with Flink JDBC driver and Flink SQL 
gateway. See the documentation of your desired tool on how to use a JDBC driver.
+Any tool supporting JDBC API can be used with Flink JDBC driver and Flink SQL 
gateway. See the documentation of your desired tool on how to use a custom JDBC 
driver.
 
 ## Use with Application
 
-### Use with Java
+### Java
 
-Flink JDBC driver is a library for accessing Flink clusters through the JDBC 
API. For the general usage of JDBC in Java, see [JDBC 
tutorial](https://docs.oracle.com/javase/tutorial/jdbc/index.html).
+The Flink JDBC driver is a library for accessing Flink clusters through the 
JDBC API. For the general usage of JDBC in Java, see [JDBC 
tutorial](https://docs.oracle.com/javase/tutorial/jdbc/index.html).
 
 1. Add the following dependency in pom.xml of project or download 
flink-jdbc-driver-bundle-{VERSION}.jar and add it to your classpath.
 2. Connect to a Flink SQL gateway in your Java code with specific url.
@@ -228,8 +252,8 @@ public class Sample {
 }
 ```
 
-### Use with Others
+### Other languages
 
-In addition to java, Flink JDBC driver can be used by any JVM language such as 
scala, kotlin and ect, you can add the dependency of Flink JDBC driver in your 
project and use it directly.
+In addition to Java, the Flink JDBC driver can be used from any JVM language such as Scala, Kotlin, etc. Add the Flink JDBC driver dependency to your
+project and use it directly.
 
-Most applications may use data access frameworks to access data, for example, 
JOOQ, MyBatis and Spring Data. You can config Flink JDBC driver in them to 
perform Flink queries on an exist Flink cluster, just like a regular database.
+Many applications access data in SQL databases, either directly or through frameworks like JOOQ, MyBatis, and Spring Data. You can configure these applications and frameworks to use the Flink JDBC driver so that they perform SQL queries on a Flink cluster instead of a regular database.
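+
+As an illustration, here is a hypothetical jOOQ sketch (not from the Flink docs): it assumes a recent jOOQ version, that jOOQ falls back to its generic `DEFAULT` dialect for the unrecognized `jdbc:flink` URL, and a gateway running at `localhost:8083` as in the examples above.
+
+```java
+import org.jooq.CloseableDSLContext;
+import org.jooq.impl.DSL;
+
+public class FlinkJooqSample {
+    public static void main(String[] args) {
+        // jOOQ opens a JDBC connection via DriverManager, which picks up the
+        // Flink JDBC driver from the classpath (hypothetical usage sketch).
+        try (CloseableDSLContext ctx = DSL.using("jdbc:flink://localhost:8083")) {
+            // The query is executed on the Flink cluster behind the SQL Gateway.
+            ctx.fetch("SHOW TABLES").forEach(row -> System.out.println(row.get(0)));
+        }
+    }
+}
+```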
diff --git a/docs/content/docs/dev/table/sql-gateway/hiveserver2.md 
b/docs/content/docs/dev/table/sql-gateway/hiveserver2.md
index f879e6ebfff..5ae5fe22e6b 100644
--- a/docs/content/docs/dev/table/sql-gateway/hiveserver2.md
+++ b/docs/content/docs/dev/table/sql-gateway/hiveserver2.md
@@ -3,7 +3,7 @@ title: HiveServer2 Endpoint
 weight: 3
 type: docs
 aliases:
-- /dev/table/sql-gateway/hiveserver2.html
+- /docs/dev/table/hive-compatibility/hiveserver2/
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -26,9 +26,294 @@ under the License.
 
 # HiveServer2 Endpoint
 
-HiveServer2 Endpoint is compatible with 
[HiveServer2](https://cwiki.apache.org/confluence/display/hive/hiveserver2+overview)
-wire protocol and allows users to interact (e.g. submit Hive SQL) with Flink 
SQL Gateway with existing Hive clients, such as Hive JDBC, Beeline, DBeaver, 
Apache Superset and so on.
+The [Flink SQL Gateway]({{< ref "docs/dev/table/sql-gateway/overview" >}}) supports deploying as a HiveServer2 Endpoint which is compatible with the [HiveServer2](https://cwiki.apache.org/confluence/display/hive/hiveserver2+overview) wire protocol. This allows users to submit Hive-dialect SQL through the Flink SQL Gateway with existing Hive clients using Thrift or the Hive JDBC driver. These clients include Beeline, DBeaver, Apache Superset, and so on.
 
-It suggests to use HiveServer2 Endpoint with Hive Catalog and Hive dialect to 
get the same experience
-as HiveServer2. Please refer to the [Hive Compatibility]({{< ref 
"docs/dev/table/hive-compatibility/hiveserver2" >}})
-for more details. 
+It is recommended to use the HiveServer2 Endpoint with a Hive Catalog and Hive 
dialect to get the same experience as HiveServer2. Please refer to [Hive 
Dialect]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/overview" >}}) 
for more details.
+
+Setting Up
+----------------
+Before you start the SQL Gateway with the HiveServer2 Endpoint, please prepare the required [dependencies]({{< ref "docs/connectors/table/hive/overview#dependencies" >}}).
+
+### Configure HiveServer2 Endpoint
+
+The HiveServer2 Endpoint is not the default endpoint for the SQL Gateway. You can configure the SQL Gateway to use the HiveServer2 Endpoint by calling
+```bash
+$ ./bin/sql-gateway.sh start -Dsql-gateway.endpoint.type=hiveserver2 
-Dsql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir=<path to hive conf>
+```
+
+or add the following configuration into `conf/flink-conf.yaml` (please replace `<path to hive conf>` with your Hive conf path).
+
+```yaml
+sql-gateway.endpoint.type: hiveserver2
+sql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir: <path to hive conf>
+```
+
+### Connecting to HiveServer2 
+
+After starting the SQL Gateway, you can submit SQL using Apache Hive Beeline.
+
+```bash
+$ ./beeline
+SLF4J: Class path contains multiple SLF4J bindings.
+SLF4J: Found binding in 
[jar:file:/Users/ohmeatball/Work/hive-related/apache-hive-2.3.9-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: Found binding in 
[jar:file:/usr/local/Cellar/hadoop/3.2.1_1/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.
+SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
+Beeline version 2.3.9 by Apache Hive
+beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl
+Connecting to jdbc:hive2://localhost:10000/default;auth=noSasl
+Enter username for jdbc:hive2://localhost:10000/default:
+Enter password for jdbc:hive2://localhost:10000/default:
+Connected to: Apache Flink (version 1.16)
+Driver: Hive JDBC (version 2.3.9)
+Transaction isolation: TRANSACTION_REPEATABLE_READ
+0: jdbc:hive2://localhost:10000/default> CREATE TABLE Source (
+. . . . . . . . . . . . . . . . . . . .> a INT,
+. . . . . . . . . . . . . . . . . . . .> b STRING
+. . . . . . . . . . . . . . . . . . . .> );
++---------+
+| result  |
++---------+
+| OK      |
++---------+
+0: jdbc:hive2://localhost:10000/default> CREATE TABLE Sink (
+. . . . . . . . . . . . . . . . . . . .> a INT,
+. . . . . . . . . . . . . . . . . . . .> b STRING
+. . . . . . . . . . . . . . . . . . . .> );
++---------+
+| result  |
++---------+
+| OK      |
++---------+
+0: jdbc:hive2://localhost:10000/default> INSERT INTO Sink SELECT * FROM 
Source; 
++-----------------------------------+
+|              job id               |
++-----------------------------------+
+| 55ff290b57829998ea6e9acc240a0676  |
++-----------------------------------+
+1 row selected (2.427 seconds)
+```
+
+Endpoint Options
+----------------
+
+Below are the options supported when creating a HiveServer2 Endpoint instance with a YAML file or DDL.
+
+<table class="configuration table table-bordered">
+    <thead>
+        <tr>
+            <th class="text-left" style="width: 20%">Key</th>
+            <th class="text-center" style="width: 8%">Required</th>
+            <th class="text-left" style="width: 7%">Default</th>
+            <th class="text-left" style="width: 10%">Type</th>
+            <th class="text-left" style="width: 55%">Description</th>
+        </tr>
+    </thead>
+    <tbody>
+        <tr>
+            <td><h5>sql-gateway.endpoint.type</h5></td>
+            <td>required</td>
+            <td style="word-wrap: break-word;">"rest"</td>
+            <td>List&lt;String&gt;</td>
+            <td>Specify which endpoint to use; here it should be 'hiveserver2'.</td>
+        </tr>
+        <tr>
+            
<td><h5>sql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir</h5></td>
+            <td>required</td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>String</td>
+            <td>URI to your Hive conf dir containing hive-site.xml. The URI 
needs to be supported by Hadoop FileSystem. If the URI is relative, i.e. 
without a scheme, local file system is assumed. If the option is not specified, 
hive-site.xml is searched in class path.</td>
+        </tr>
+        <tr>
+            
<td><h5>sql-gateway.endpoint.hiveserver2.catalog.default-database</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">"default"</td>
+            <td>String</td>
+            <td>The default database to use when the catalog is set as the 
current catalog.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.catalog.name</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">"hive"</td>
+            <td>String</td>
+            <td>Name for the pre-registered hive catalog.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.module.name</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">"hive"</td>
+            <td>String</td>
+            <td>Name for the pre-registered hive module.</td>
+        </tr>
+        <tr>
+            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.exponential.backoff.slot.length</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">100 ms</td>
+            <td>Duration</td>
+            <td>Binary exponential backoff slot time for Thrift clients during login to HiveServer2, for retries until hitting the Thrift client timeout.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.host</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>String</td>
+            <td>The server address of the HiveServer2 host to be used for communication. Default is empty, which means binding to localhost. This is only necessary if the host has multiple network addresses.</td>
+        </tr>
+        <tr>
+            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.login.timeout</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">20 s</td>
+            <td>Duration</td>
+            <td>Timeout for Thrift clients during login to HiveServer2</td>
+        </tr>
+        <tr>
+            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.max.message.size</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">104857600</td>
+            <td>Long</td>
+            <td>Maximum message size in bytes a HS2 server will accept.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.port</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">10000</td>
+            <td>Integer</td>
+            <td>The port of the HiveServer2 endpoint.</td>
+        </tr>
+        <tr>
+            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.keepalive-time</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">1 min</td>
+            <td>Duration</td>
+            <td>Keepalive time for an idle worker thread. When the number of 
workers exceeds min workers, excessive threads are killed after this time 
interval.</td>
+        </tr>
+        <tr>
+            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.threads.max</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">512</td>
+            <td>Integer</td>
+            <td>The maximum number of Thrift worker threads</td>
+        </tr>
+        <tr>
+            
<td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.threads.min</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">5</td>
+            <td>Integer</td>
+            <td>The minimum number of Thrift worker threads</td>
+        </tr>
+    </tbody>
+</table>
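+
+For example, a `conf/flink-conf.yaml` snippet that pins the Thrift port and the catalog name using the options above might look like this (values are illustrative):
+
+```yaml
+sql-gateway.endpoint.type: hiveserver2
+sql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir: /path/to/hive-conf
+sql-gateway.endpoint.hiveserver2.catalog.name: hive
+sql-gateway.endpoint.hiveserver2.thrift.port: 10000
+```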
+
+HiveServer2 Protocol Compatibility
+----------------
+
+The Flink SQL Gateway with the HiveServer2 Endpoint aims to provide the same experience as Apache Hive's HiveServer2.
+Therefore, the HiveServer2 Endpoint automatically initializes the environment to give Hive users a more consistent experience:
+- creates the [Hive Catalog]({{< ref "docs/connectors/table/hive/hive_catalog.md" >}}) as the default catalog;
+- uses Hive built-in functions by loading the Hive function module and placing it first in the [function module]({{< ref "docs/dev/table/modules/index.md" >}}) list;
+- switches to the Hive dialect (`table.sql-dialect = hive`);
+- switches to batch execution mode (`execution.runtime-mode = BATCH`);
+- executes DML statements (e.g. INSERT INTO) synchronously, one by one (`table.dml-sync = true`).
+
+With these essential prerequisites, you can submit Hive SQL in the Hive style but execute it in the Flink environment.
+
+Clients & Tools
+----------------
+
+The HiveServer2 Endpoint is compatible with the HiveServer2 wire protocol. Therefore, the tools that work with Hive SQL also work with
+the SQL Gateway with the HiveServer2 Endpoint. Currently, Hive JDBC, Hive Beeline, DBeaver, Apache Superset and others have been tested to connect to the
+Flink SQL Gateway with the HiveServer2 Endpoint and submit SQL.
+
+### Hive JDBC
+
+The SQL Gateway is compatible with HiveServer2. You can write a program that uses Hive JDBC to connect to the SQL Gateway. To build the program, add the
+following dependency to your project's pom.xml.
+
+```xml
+<dependency>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-jdbc</artifactId>
+    <version>${hive.version}</version>
+</dependency>
+```
+
+After importing the dependency, you can use the following program to connect and list tables in the Hive Catalog.
+
+```java
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+public class JdbcConnection {
+    public static void main(String[] args) throws Exception {
+        try (
+                // Please replace the JDBC URI with your actual host, port and 
database.
+                Connection connection = 
DriverManager.getConnection("jdbc:hive2://{host}:{port}/{database};auth=noSasl");
 
+                Statement statement = connection.createStatement()) {
+            statement.execute("SHOW TABLES");
+            ResultSet resultSet = statement.getResultSet();
+            while (resultSet.next()) {
+                System.out.println(resultSet.getString(1));
+            }
+        }
+    }
+}
+```
+
+### DBeaver
+
+DBeaver uses Hive JDBC to connect to HiveServer2, so DBeaver can connect to the Flink SQL Gateway to submit Hive SQL. Because of the
+API compatibility, you can connect to the Flink SQL Gateway as if it were HiveServer2. Please refer to the [guidance](https://github.com/dbeaver/dbeaver/wiki/Apache-Hive)
+on how to use DBeaver to connect to the Flink SQL Gateway with the HiveServer2 Endpoint.
+
+<span class="label label-danger">Attention</span> Currently, HiveServer2 
Endpoint doesn't support authentication. Please use 
+the following JDBC URL to connect to the DBeaver:
+
+```bash
+jdbc:hive2://{host}:{port}/{database};auth=noSasl
+```
+
+After the setup, you can explore Flink with DBeaver.
+
+{{< img width="80%" src="/fig/dbeaver.png" alt="DBeaver" >}}
+
+### Apache Superset
+
+Apache Superset is a powerful data exploration and visualization platform. Thanks to the API compatibility, you can connect
+to the Flink SQL Gateway as if it were Hive. Please refer to the [guidance](https://superset.apache.org/docs/databases/hive) for more details.
+
+{{< img width="80%" src="/fig/apache_superset.png" alt="Apache Superset" >}}
+
+<span class="label label-danger">Attention</span> Currently, HiveServer2 
Endpoint doesn't support authentication. Please use
+the following JDBC URL to connect to the Apache Superset:
+
+```bash
+hive://hive@{host}:{port}/{database}?auth=NOSASL
+```
+
+Streaming SQL
+----------------
+
+Flink is a unified batch and streaming engine. You can switch to streaming execution with the following SQL statements:
+
+```sql
+SET table.sql-dialect=default;
+SET execution.runtime-mode=streaming;
+SET table.dml-sync=false;
+```
+
+After that, the environment is ready to parse Flink SQL, optimize it with the streaming planner, and submit jobs in async mode.
+
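+For instance, you can then run a simple unbounded query using the built-in `datagen` connector (an illustrative sketch; the table definition is not from the original docs):
+
+```sql
+CREATE TABLE Orders (order_id INT, price DOUBLE)
+    WITH ('connector' = 'datagen', 'rows-per-second' = '1');
+
+SELECT * FROM Orders;
+```
+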
+{{< hint info >}}
+Notice: The `RowKind` in the HiveServer2 API is always `INSERT`. Therefore, the HiveServer2 Endpoint doesn't support
+presenting CDC data.
+{{< /hint >}}
+
+Supported Types
+----------------
+
+The HiveServer2 Endpoint is currently built on Hive2 and supports all available Hive2 types. For Hive-compatible tables, the HiveServer2 Endpoint
+obeys the same rules as the HiveCatalog to convert Flink types to Hive types and serialize them to Thrift objects. Please refer to
+the [HiveCatalog]({{< ref "docs/connectors/table/hive/hive_catalog#supported-types" >}}) for the type mappings.
\ No newline at end of file
diff --git a/docs/content/docs/dev/table/sql-gateway/overview.md 
b/docs/content/docs/dev/table/sql-gateway/overview.md
index 99b312cacc6..4bf66006468 100644
--- a/docs/content/docs/dev/table/sql-gateway/overview.md
+++ b/docs/content/docs/dev/table/sql-gateway/overview.md
@@ -214,7 +214,7 @@ $ ./sql-gateway -Dkey=value
 Supported Endpoints
 ----------------
 
-Flink natively supports [REST Endpoint]({{< ref 
"docs/dev/table/sql-gateway/rest" >}}) and [HiveServer2 Endpoint]({{< ref 
"docs/dev/table/hive-compatibility/hiveserver2" >}}).
+Flink natively supports [REST Endpoint]({{< ref 
"docs/dev/table/sql-gateway/rest" >}}) and [HiveServer2 Endpoint]({{< ref 
"docs/dev/table/sql-gateway/hiveserver2" >}}).
 The SQL Gateway is bundled with the REST Endpoint by default. With the 
flexible architecture, users are able to start the SQL Gateway with the 
specified endpoints by calling 
 
 ```bash

