SbloodyS commented on code in PR #9840:
URL: https://github.com/apache/dolphinscheduler/pull/9840#discussion_r861448043


##########
dolphinscheduler-task-plugin/dolphinscheduler-task-flink/src/main/java/org/apache/dolphinscheduler/plugin/task/flink/FlinkConstants.java:
##########
@@ -24,19 +24,48 @@ private FlinkConstants() {
     }
 
     /**
-     * flink
+     * flink command
+     * usage: flink run [OPTIONS] <jar-file> <arguments>
+     */
+    public static final String FLINK_COMMAND = "flink";
+    public static final String FLINK_RUN = "run";
+
+    /**
+     * flink sql command
+     * usage: sql-client.sh -i <initialization file>, -j <JAR file>, -f <script file>...
+     */
+    public static final String FLINK_SQL_COMMAND = "sql-client.sh";

Review Comment:
   > 【-i】: Script file used to initialize the session context. If an error occurs during execution, the SQL client will exit. Note that queries and INSERT INTO statements are not allowed in the init file.
   > 
   > The initialization script specified by -i is used by most users to set parameters such as `set a=b`. Of course, these parameters can also be placed in the SQL script specified by -f. Our users also want to set Flink parameters themselves in the front-end editor, and we do built-in processing in the FlinkTask class for the Flink parameters filled in on the front end. These parameters are as follows:
   > 
   > ```sql
   > set sql-client.execution.result-mode=tableau;
   > set execution.target=yarn-per-job;
   > set yarn.application.name=streaming-top-speed-windowing-sql;
   > set yarn.application.queue=default;
   > set jobmanager.memory.process.size=1024mb;
   > set taskmanager.memory.process.size=4096mb;
   > set taskmanager.numberOfTaskSlots=2;
   > set parallelism.default=4;
   > set execution.runtime-mode=streaming;
   > ```
   > 
   > At the same time, -i can be controlled directly by a front-end option parameter ~
   
   I do not think so. In my view, the initialization script specified by ```-i``` is mostly used to set up different connectors: users execute ```DDL``` or ```USE CATALOG``` statements in it to create a ```flink-cdc``` or ```kafka``` table and then use it.
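   
   For context, a minimal init file of the kind described above might look like the sketch below. All names here (table, topic, broker address) are illustrative assumptions, not taken from the PR:
   
   ```sql
   -- init.sql: hypothetical initialization script passed via `sql-client.sh -i init.sql`.
   -- It holds only DDL / session setup; queries and INSERT INTO are not allowed here.
   CREATE TABLE kafka_speed_source (
       car_id BIGINT,
       speed  DOUBLE,
       ts     TIMESTAMP(3)
   ) WITH (
       'connector' = 'kafka',                              -- needs the Kafka SQL connector JAR on the classpath
       'topic' = 'speed',                                  -- illustrative topic name
       'properties.bootstrap.servers' = 'localhost:9092',  -- illustrative broker address
       'scan.startup.mode' = 'earliest-offset',
       'format' = 'json'
   );
   ```
   
   The actual query would then live in the script passed with -f, e.g. `sql-client.sh -i init.sql -f query.sql`.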



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
