[ 
https://issues.apache.org/jira/browse/FLINK-35369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Lee updated FLINK-35369:
------------------------------
    Description: 
Flink has rich and varied SQL offerings and deployment modes, and it can take 
some time for new users to investigate them and arrive at the right option. 
Consider the available options:

1. Flink SQL Client (through SQL gateway, embedded or remote)
2. REST through SQL Gateway
3. A SQL client with Flink JDBC driver (through SQL gateway's REST interface)
4. A SQL client with Hive JDBC driver (through SQL gateway's HiveServer2 
interface)
5. Compile and submit through Flink Client (Java/Scala/Python)
6. Submitting a packaged archive containing Table API code to the JobManager 
REST endpoint

(Additionally, Apache Zeppelin also provides a notebook experience with its 
Flink SQL interpreter, which builds upon the Flink Client.)
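
To illustrate option 2: submitting a statement through the SQL Gateway's REST 
interface takes only two calls (open a session, then post the statement). A 
minimal Python sketch, assuming a gateway running locally on the default REST 
port 8083 and the documented v1 endpoints; the helper names are illustrative:

```python
import json
import urllib.request

# Assumed address of a locally running SQL Gateway (default REST port 8083).
GATEWAY = "http://localhost:8083"

def _post(url, payload=None):
    """POST a JSON payload and decode the JSON response."""
    data = json.dumps(payload or {}).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def statement_url(session_handle):
    """URL for submitting a statement within an open session."""
    return f"{GATEWAY}/v1/sessions/{session_handle}/statements"

def run_statement(sql):
    # Open a session, then submit the statement; each call returns a handle
    # identifying server-side state for later result fetching.
    session = _post(f"{GATEWAY}/v1/sessions")["sessionHandle"]
    operation = _post(statement_url(session),
                      {"statement": sql})["operationHandle"]
    return session, operation
```

Results could then be polled from the operation's result endpoint; see the SQL 
Gateway REST documentation for the exact response shapes.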

The improvement suggested here is to either enrich the existing [Table API 
and SQL overview 
page|https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/dev/table/overview/]
 or create a new page that contains the following information:

1. A diagram of the various options available (see diagram below)
2. A table explaining the pros of each approach, e.g. the Flink SQL Client for 
initial experimentation, development and OLAP; building on the Flink SQL JDBC 
driver or the SQL Gateway REST API for automation; HiveServer2 for 
interoperability with Hive, etc. The table will guide users to the 
corresponding page for each option.  !LandscapeOfFlinkSQL.drawio(6).png!

  was:
Flink has rich and varied SQL offerings and deployment modes, and it can take 
some time for new users to investigate them and arrive at the right option. 
Consider the available options:

1. Flink SQL Client (through SQL gateway, embedded or remote)
2. REST through SQL Gateway
3. A SQL client with Flink JDBC driver (through SQL gateway's REST interface)
4. A SQL client with Hive JDBC driver (through SQL gateway's HiveServer2 
interface)
5. Flink Client submitting packaged application (Java/Scala/Python)
6. Submitting a packaged archive containing Table API code to the JobManager 
REST endpoint

(Additionally, Apache Zeppelin also provides a notebook experience with its 
Flink SQL interpreter, which builds upon the Flink Client.)

The improvement suggested here is to either enrich the existing [Table API 
and SQL overview 
page|https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/dev/table/overview/]
 or create a new page that contains the following information:

1. A diagram of the various options available (see diagram below)
2. A table explaining the pros of each approach, e.g. the Flink SQL Client for 
initial experimentation, development and OLAP; building on the Flink SQL JDBC 
driver or the SQL Gateway REST API for automation; HiveServer2 for 
interoperability with Hive, etc. The table will guide users to the 
corresponding page for each option.  !LandscapeOfFlinkSQL.drawio(6).png!


> Improve `Table API and SQL` page or add new page to guide new users to right 
> Flink SQL option
> ---------------------------------------------------------------------------------------------
>
>                 Key: FLINK-35369
>                 URL: https://issues.apache.org/jira/browse/FLINK-35369
>             Project: Flink
>          Issue Type: Improvement
>          Components: Project Website
>    Affects Versions: 1.19.0
>            Reporter: Keith Lee
>            Priority: Major
>         Attachments: LandscapeOfFlinkSQL.drawio(6).png
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
