[ 
https://issues.apache.org/jira/browse/FLINK-6073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943164#comment-15943164
 ] 

Fabian Hueske commented on FLINK-6073:
--------------------------------------

Thanks for this proposal. 
I think we discussed this use case on the [mailing list 
before|https://lists.apache.org/thread.html/7ec475aad09f10488d091857abd3c9fbcbc2a4127cd0f88dabb47595@%3Cdev.flink.apache.org%3E].

One of the main goals of Flink's Table API and SQL on streams is unified 
semantics for queries on streams and batch tables.
This is very important in order to run the same query on streams and historic 
data (archived streams).

So far, the APIs comply with this requirement. Streams are logically converted 
into tables by appending events to a conceptual table.
Running a query on such an append table must return the same result regardless 
of whether it is executed in a streaming (the append table is continuously 
updated) or in a batch (the append table is already fixed) fashion.

This is not the case for the semantics that you propose in this JIRA. 

Let's take your example of two streams. If we turn Stream1 and Stream2 into two 
tables T1 and T2 by appending all records we would get:

T1
|| time || exchange ||
| T1 | 1.2 |
| T4 | 1.3 |

T2
|| time || user || amount || 
| T2 | User1 | 10 |
| T3 | User2 | 11 |
| T5 | User3 | 9 |

Let's see the results if we execute the proposed queries on these tables in a 
batch fashion:

- Q1 ({{SELECT amount, (SELECT exchange FROM T1) AS field1 FROM T2}}) would 
fail because T1 contains more than a single row.
- Q2 ({{SELECT amount, (SELECT exchange FROM T1 ORDER BY time DESC LIMIT 1) AS 
field1 FROM T2}}) would return the following result 

|| amount || field1 ||
| 10 | 1.3 |
| 11 | 1.3 |
| 9 | 1.3 |

This is different from the result that you want to compute, because the inner 
query returns the single constant 1.3 and there is no time-based predicate 
between T1 and T2.

The batch query that would produce the correct result would look like this 
(given that there are no two records with the same time in T1):

{code}
SELECT amount, exchange 
FROM T1, T2
WHERE T1.time = (
  SELECT MAX(t1_2.time)
  FROM T1 AS t1_2
  WHERE t1_2.time <= T2.time)
{code}
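
If I am not mistaken, on the tables above this query returns (10, 1.2), 
(11, 1.2), and (9, 1.3), i.e., exactly the per-record latest exchange rate that 
the proposal aims for.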

Due to the unified stream batch semantics, this should also be the query that 
produces the correct result on a stream.

While I agree that unified semantics for batch and stream processing is more 
meaningful for event-time processing, I am convinced that this does not mean 
that processing-time queries should have different semantics than event-time 
queries.
IMO, the semantics of processing-time and event-time queries should be as close 
as possible. Also, the question of retraction is independent of event time and 
processing time. Event time might need more retractions due to out-of-order or 
late data, but processing-time operators need to support retraction as well. 
Consider for example emitting early results for a 1-hour window, computing 
aggregates without a window clause, or filtering with a time-based predicate. 
Similarly, a join like the one above would need to retract the 1.2 exchange rate 
and replace it with 1.3 (in fact, this would require buffering the complete T2 
input stream, so such a query would not be feasible to execute).
        
Best, Fabian

> Support for SQL inner queries for proctime
> ------------------------------------------
>
>                 Key: FLINK-6073
>                 URL: https://issues.apache.org/jira/browse/FLINK-6073
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API & SQL
>            Reporter: radu
>            Assignee: radu
>            Priority: Critical
>              Labels: features
>         Attachments: innerquery.png
>
>
> Time target: Proc Time
> **SQL targeted query examples:**
>  
> Q1) `Select item, (select item2 from stream2) as itemExtern from stream1;`
> Comments: This is the main functionality targeted by this JIRA: enabling the 
> main query to combine its result with the result of an inner query.
> Q2) `Select s1.item, (Select a2 from table as t2 where t2.id = s1.id 
> limit 1) from s1;`
> Comments:
> Another, equivalent way to write the first inner-query example is with 
> LIMIT 1. This ensures the equivalence with the SingleElementAggregation used 
> when translating the main target syntax for the inner query. We must ensure 
> that the two syntaxes are supported and implemented with the same 
> functionality. There is also the option to select elements in the inner query 
> from a table, not just from a different stream. Implementing this support 
> should be a sub-JIRA issue.
> **Description:**
> Parsing the SQL inner query via Calcite translates it into a join (a left 
> join with an always-true condition) between the output of the query on the 
> main stream and the output of a single-value aggregation on the inner query. 
> The translation logic is shown below.
> ```
> LogicalJoin [condition=true;type=LEFT]
>       LogicalSingleValue[type=aggregation]
>               …logic of inner query (LogicalProject, LogicalScan…)
>       …logical of main,external query (LogicalProject, LogicalScan…))
> ```
> `LogicalJoin[condition=true;type=LEFT]` – this can be considered a 
> special-case operation rather than a proper stream-to-stream join. The 
> implementation should attach to the main stream's output a value coming from 
> a different query. 
> `LogicalSingleValue[type=aggregation]` – this can be interpreted as the 
> holder of the single value that results from the inner query. As this 
> operator is the guarantee that the inner query brings no more than one value 
> to the join, there are several options on how to define its functionality in 
> the streaming context:
> 1.    Throw an error if the inner query returns more than one result. This 
> would be the typical behavior of standard SQL over a database. However, it 
> is very unlikely that a stream would only ever emit a single value, so such 
> a behavior would be of limited use when the inner query runs over a stream. 
> It might be more useful and common if the inner query is over a table. 
> 2.    We can interpret this operator as the guarantee that at any one moment 
> only one value is selected. The behavior would therefore rather be that of a 
> filter that selects a single value. This brings the option that the output of 
> this operator evolves in time with the second stream that drives the inner 
> query. The decision on when to evolve the stream should depend on what marks 
> its evolution (processing time, watermarks/event time, ingestion time, 
> window time partitions…).
> In this JIRA issue the evolution would be marked by processing time, and for 
> this implementation the operator would work based on option 2. Hence, at 
> every moment the state of the operator that holds the single value can evolve 
> with the latest element. In this way the logic of the inner query is to 
> always select the last element (its fields, or other query-related 
> transformations based on the last value). This behavior is needed in many 
> scenarios (e.g., the typical problem of computing a total income when the 
> incomes arrive in multiple currencies and the total needs to be computed in 
> one currency by always using the latest exchange rate).
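> A minimal sketch of this option-2 behavior (an illustration only, not the 
> join-based operator proposed below; class and variable names are 
> hypothetical), using a CoFlatMapFunction with parallelism 1 to keep the 
> latest inner-query value and attach it to every main-stream record:
> ```
> import org.apache.flink.api.java.tuple.Tuple2;
> import org.apache.flink.streaming.api.datastream.DataStream;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
> import org.apache.flink.util.Collector;
>
> public class LatestValueInnerQuerySketch {
>   public static void main(String[] args) throws Exception {
>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>
>     // hypothetical sample data standing in for inputstream1 / inputstream2
>     DataStream<Double> exchangeRates = env.fromElements(1.2, 1.3);
>     DataStream<Integer> amounts = env.fromElements(10, 11, 9);
>
>     exchangeRates
>       .connect(amounts)
>       .flatMap(new CoFlatMapFunction<Double, Integer, Tuple2<Integer, Double>>() {
>         // the "single value" of the inner query; not checkpointed, illustration only
>         private Double latestRate;
>
>         @Override
>         public void flatMap1(Double rate, Collector<Tuple2<Integer, Double>> out) {
>           latestRate = rate; // a new inner-query value simply replaces the old one
>         }
>
>         @Override
>         public void flatMap2(Integer amount, Collector<Tuple2<Integer, Double>> out) {
>           if (latestRate != null) { // emit (amount, latest exchange rate)
>             out.collect(Tuple2.of(amount, latestRate));
>           }
>         }
>       })
>       .setParallelism(1) // a single instance holds the single value
>       .print();
>
>     env.execute("latest-value inner query sketch");
>   }
> }
> ```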
> This behavior is also motivated by the functionality of the 3rd SQL query 
> example, Q3 (using the inner query as the input source of the FROM clause). 
> In such scenarios, the selection in the main query would need to be based on 
> the latest elements. With such a behavior, the two types of queries (Q1 and 
> Q3) would therefore provide the same, intuitive result.
> **Functionality example**
> Based on the logical translation plan, we next exemplify the behavior of the 
> inner query applied to two streams that operate on processing time.
> SELECT amount, (SELECT exchange FROM inputstream1) AS field1 FROM inputstream2
> ||Time||Stream1||Stream2||Output||
> |T1|1.2| | |
> |T2| |User1,10|(10,1.2)|
> |T3| |User2,11|(11,1.2)|
> |T4|1.3| | |
> |T5| |User3,9|(9,1.3)|
> |...| | | |
> Note 1. For streams that operate on event time, at moment T4 we would need to 
> retract the previous outputs ((10, 1.2), (11, 1.2)) and re-emit them as 
> ((10, 1.3), (11, 1.3)). 
> Note 2. Rather than failing when a new value arrives on the inner query's 
> stream, we just update the state that holds the single value. If option 1 for 
> the behavior of LogicalSingleValue is chosen, then an error should be 
> triggered at moment T4.
> **Implementation option**
> Considering the notes and the option for the behavior the operator would be 
> implemented by using the join function of flink  with a custom always true 
> join condition and an inner selection for the output based on the incoming 
> direction (to mimic the left join). The single value selection can be 
> implemented over a statefull flat map. In case the join is executed in 
> parallel by multiple operators, than we either use a parallelism of 1 for the 
> statefull flatmap (option 1) or we broadcast the outputs of the flatmap to 
> all join instances to ensure consistency of the results (option 2). 
> Considering that the flatMap functionality of selecting one value is light, 
> option 1 is better.  The design schema is shown below.
> !innerquery.png!
> **General logic of Join**
> ```
> leftDataStream.join(rightDataStream)
>     // constant key selectors: both inputs share one join key
>     .where(new ConstantConditionSelector())
>     .equalTo(new ConstantConditionSelector())
>     // placeholder window assigner
>     .window(window.create())
>     // custom trigger and evictor control when the join fires and which
>     // elements are kept in the window
>     .trigger(new LeftFireTrigger())
>     .evictor(new Evictor())
>     .apply(new JoinFunction());
> ```
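> Since both key selectors return a constant, every record of both inputs is 
> routed to the same key group, so the join above effectively runs on a single 
> parallel instance; this is consistent with choosing option 1 (a parallelism 
> of 1) for the stateful flat map that selects the single value.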


