Hi Henry,

such a feature is currently under discussion [1]; feel free to participate there and give feedback. For now, you need some intermediate store between the steps; usually this is Kafka or a filesystem.
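
For the filesystem variant, a minimal sketch could look like the following (Flink 1.7-era Scala API; the table name, path, and example data are made up for illustration):

import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._
import org.apache.flink.table.sinks.CsvTableSink

object DebugViewA {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    // register a small example table as "source"
    val data = Seq((1, "a"), (2, "b"))
    tEnv.registerTable("source", env.fromCollection(data).toTable(tEnv, 'id, 'msg))

    // the intermediate step to inspect (the body of "create view A")
    val viewA = tEnv.sqlQuery("SELECT id, msg FROM source WHERE id > 1")

    // dump the intermediate result to the filesystem instead of a "real" sink
    viewA.writeToSink(new CsvTableSink("/tmp/view-a-debug.csv", ","))

    env.execute()
  }
}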

I would recommend writing small unit tests that test each SQL step, as is done here [2].
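
A stripped-down sketch of that pattern (not the actual code from [2]; the CollectSink helper, table names, and test data are mine):

import org.apache.flink.streaming.api.functions.sink.SinkFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._
import org.apache.flink.types.Row

import scala.collection.mutable

// in-memory sink so the test can assert on the produced rows
// (fine for local test execution, where sink and test share one JVM)
object CollectSink {
  val results: mutable.ArrayBuffer[String] = mutable.ArrayBuffer()
}

class CollectSink extends SinkFunction[Row] {
  override def invoke(value: Row): Unit =
    CollectSink.results.synchronized { CollectSink.results += value.toString }
}

object ViewATest {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    // register the input of the step under test as a small in-memory table
    val data = Seq((1, "keep"), (2, "drop"))
    tEnv.registerTable("source", env.fromCollection(data).toTable(tEnv, 'id, 'msg))

    // the single SQL step under test (the body of "create view A")
    val viewA = tEnv.sqlQuery("SELECT id, msg FROM source WHERE msg = 'keep'")
    viewA.toAppendStream[Row].addSink(new CollectSink)

    env.execute()
    assert(CollectSink.results == mutable.ArrayBuffer("1,keep"))
  }
}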

I hope this helps.

Regards,
Timo

[1] http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Support-Interactive-Programming-in-Flink-Table-API-tt25372.html#a25666
[2] https://github.com/apache/flink/blob/master/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/runtime/stream/sql/SqlITCase.scala


On 07.01.19 16:19, 徐涛 wrote:
Hi Expert,
        Usually when we write a Flink SQL program, we need to use multiple
tables to get the final result. Sometimes this is because it is not possible
to implement complicated logic in one SQL statement, and sometimes splitting
it up makes the logic clearer. For example:
        create view A as
        select * from source where xxx;

        create view B as
        select * from A where xxx;

        create view C as
        select * from B where xxx;

        insert into sink
        select * from C where xxx;

        But when we write complicated logic, we may build it up step by step:
make sure the first step is correct, then go on with the next step. In batch
programs such as Hive or Spark, we usually write SQL like this, step by
step.

        For example:
        create view A as
        select * from source where xxx;
        I want to check whether the content of A is correct; if it is, I go
on to write the next SQL statement. But I do not want to define a sink for
each step, because it is not worth creating a sink just for such a “debug” step.
        So is there a solution or best practice for such a scenario? How do we
easily debug or verify the correctness of a Flink SQL program?

Best
Henry

