zentol commented on a change in pull request #18812:
URL: https://github.com/apache/flink/pull/18812#discussion_r812084889



##########
File path: docs/content/docs/dev/configuration/overview.md
##########
@@ -177,19 +177,46 @@ bash -c "$(curl https://flink.apache.org/q/gradle-quickstart.sh)" -- {{< version
 
 ## Which dependencies do you need?
 
-Depending on what you want to achieve, you are going to choose a combination of our available APIs, 
-which will require different dependencies. 
+To start working on a Flink job, you usually need the following dependencies:
+
+* Flink APIs, in order to develop your job
+* [Connectors and formats]({{< ref "docs/dev/configuration/connector" >}}), in order to integrate your job with external systems
+* [Testing utilities]({{< ref "docs/dev/configuration/testing" >}}), in order to test your job
+
+And in addition to these, you might want to add 3rd party dependencies that you need to develop custom functions.
+
+### Flink APIs
+
+Flink offers two major APIs: [DataStream API]({{< ref "docs/dev/datastream/overview" >}}) and [Table API & SQL]({{< ref "docs/dev/table/overview" >}}). 
+They can be used separately, or they can be mixed, depending on your use cases:
+
+| APIs you want to use                                                               | Dependency you need to add                           |
+|------------------------------------------------------------------------------------|------------------------------------------------------|
+| [DataStream]({{< ref "docs/dev/datastream/overview" >}})                           | `flink-streaming-java`                               |
+| [DataStream with Scala]({{< ref "docs/dev/datastream/scala_api_extensions" >}})    | `flink-streaming-scala{{< scala_version >}}`         |
+| [Table API]({{< ref "docs/dev/table/common" >}})                                   | `flink-table-api-java`                               |
+| [Table API with Scala]({{< ref "docs/dev/table/common" >}})                        | `flink-table-api-scala{{< scala_version >}}`         |
+| [Table API + DataStream]({{< ref "docs/dev/table/data_stream_api" >}})             | `flink-table-api-java-bridge`                        |
+| [Table API + DataStream with Scala]({{< ref "docs/dev/table/data_stream_api" >}})  | `flink-table-api-scala-bridge{{< scala_version >}}`  |
+
+Just include them in your build tool script/descriptor, and you can start developing your job!
+
+## Running and packaging
+
+If you want to run your job by simply executing the main class, you will need `flink-runtime` in your classpath.
+In case of Table API programs, you will also need `flink-table-runtime` and `flink-table-planner-loader`.
 
-Here is a table of artifact/dependency names:
+As a rule of thumb, we **suggest** packaging the application code and all its required dependencies into one fat/uber JAR.

Review comment:
       We always expect users to create a fat jar, irrespective of deployment mode. You want it in session mode for maximum isolation between jobs, and in application/per-job mode Flink takes care of adding the user-jar to lib, so users shouldn't really be doing that themselves (there's no benefit, and it just results in inconsistent practices).
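
To make the fat-jar recommendation concrete, here is a minimal Maven sketch of a `maven-shade-plugin` setup that bundles the job and its non-`provided` dependencies into one uber JAR at `mvn package`. This is an illustration, not part of the PR: the plugin version and the `com.example.MyFlinkJob` main class are placeholder assumptions.

```xml
<!-- Sketch: build an uber/fat JAR during the package phase.
     The plugin version and mainClass are illustrative placeholders. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.2.4</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <transformers>
              <!-- Sets Main-Class in the manifest so the JAR's entry point
                   can be discovered when the job is submitted. -->
              <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                <mainClass>com.example.MyFlinkJob</mainClass>
              </transformer>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

Running `mvn package` then produces a single JAR under `target/` that can be submitted to a session cluster or used as the application JAR in application/per-job mode.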

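Relatedly, for the fat jar to stay lean, the Flink APIs from the dependency table above are typically declared with `provided` scope, since the Flink distribution already ships those classes on the cluster. A hedged sketch, assuming a `flink.version` property is defined elsewhere in the POM:

```xml
<!-- Sketch: keep core Flink APIs out of the fat jar; the cluster provides
     them at runtime. ${flink.version} is an assumed Maven property. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
```

Connectors and formats, by contrast, are not part of the distribution, so they keep the default `compile` scope and end up bundled in the fat jar.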