Re: Flink-JDBC JDBCUpsertTableSink keyFields Problem

2019-11-14 Thread 张万新
A typical use case that will generate updates (meaning not append-only) is a non-windowed group-by aggregation, like "select user, count(url) from clicks group by user". You can refer to the Flink docs at https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/streaming/dynamic_tables.
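A minimal plain-Python simulation (not Flink API) of why that query produces an update stream: each new click changes the count already emitted for its user, so a repeated key supersedes the earlier record rather than appending to it.

```python
# Simulates: SELECT user, COUNT(url) FROM clicks GROUP BY user
# over an unbounded stream -- the result per key keeps changing.

def group_by_count(clicks):
    """Yield (user, count) records; a repeated user key is an update."""
    counts = {}
    for user, url in clicks:
        counts[user] = counts.get(user, 0) + 1
        yield (user, counts[user])

clicks = [("alice", "/a"), ("bob", "/b"), ("alice", "/c")]
print(list(group_by_count(clicks)))
# [('alice', 1), ('bob', 1), ('alice', 2)] -- 'alice' appears twice;
# the second record replaces the first (an update, not an append)
```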

Re: Document an example pattern that makes sources and sinks pluggable in the production code for testing

2019-11-14 Thread Piotr Nowojski
Hi, Doesn’t the included example `ExampleIntegrationTest` demonstrate the idea of "inject special test sources and test sinks in your tests"? Piotrek > On 11 Nov 2019, at 13:44, Hung wrote: > > Hi guys, > > I found the testing part mentioned > > make sources and sinks pluggable in your
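The pluggability pattern being discussed can be sketched in plain Python (names are illustrative, not Flink's testing API): the pipeline logic receives its source and sink as parameters, so a test can inject an in-memory list instead of a real connector.

```python
# Production code would pass real source/sink connectors here;
# tests pass plain Python lists and assert on the collected output.

def run_pipeline(source, sink):
    """Core transformation, independent of where records come from or go."""
    for record in source:
        sink.append(record.upper())  # stand-in for the real business logic

# "test" wiring: in-memory source and collecting sink
collected = []
run_pipeline(source=["a", "b"], sink=collected)
print(collected)
# ['A', 'B']
```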

RemoveVertex is not working in flink Gelly.

2019-11-14 Thread RAMALINGESWARA RAO THOTTEMPUDI
Respected Sir, This is my code of Flink Gelly: for(Vertex vertex : node_set) { System.out.println(vertex); graph_copy.removeVertex(vertex); temp.add(vertex); } For some reason "removeVertex" is not working even when given correct parameters. Kindly resolve the issue. Given a list of edges, remo
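A likely cause (a sketch of the pitfall, not a verified diagnosis of this code): Gelly graphs are immutable, and `removeVertex` returns a *new* `Graph` rather than mutating the receiver, so calling it without reassigning the result discards the removal. A Python analogy with an immutable set standing in for the graph:

```python
# An immutable frozenset plays the role of the Gelly Graph here.
graph = frozenset({"v1", "v2", "v3"})

def remove_vertex(g, v):
    """Return a new 'graph' without v; the input is left untouched."""
    return g - {v}

remove_vertex(graph, "v1")          # result ignored -> graph unchanged
print("v1" in graph)                # True: the removal was lost
graph = remove_vertex(graph, "v1")  # reassigning keeps the removal
print("v1" in graph)                # False
```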

Re: Propagating event time field from nested query

2019-11-14 Thread Dawid Wysakowicz
Hi Piyush, Could you verify that the type of the `timestamp` field in the table my_kafka_stream is of TIMESTAMP(3) *ROWTIME* type? Could you share how you create this table? What you are doing should work and what I suspect is that the `timestamp` field in the `my_kafka_stream` changed the type s

Re: Initialization of broadcast state before processing main stream

2019-11-14 Thread Maxim Parkachov
Hi Vasily, unfortunately, this is a known issue with Flink; you can read the discussion under https://cwiki.apache.org/confluence/display/FLINK/FLIP-17+Side+Inputs+for+DataStream+API . At the moment I have seen 3 solutions for this issue: 1. You buffer the fact stream in local state before broadcast is
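Solution 1 can be sketched in plain Python (class and method names are made up for illustration, not Flink's `KeyedBroadcastProcessFunction` API): fact records arriving before the broadcast side is initialized are held in a buffer, which is flushed once the broadcast state arrives.

```python
class BufferingJoin:
    """Buffers fact records until the broadcast (dimension) side is ready."""

    def __init__(self):
        self.broadcast_state = None  # not yet initialized
        self.buffer = []
        self.output = []

    def on_broadcast(self, state):
        self.broadcast_state = state
        for fact in self.buffer:      # flush everything held back
            self._process(fact)
        self.buffer.clear()

    def on_fact(self, fact):
        if self.broadcast_state is None:
            self.buffer.append(fact)  # hold until broadcast arrives
        else:
            self._process(fact)

    def _process(self, fact):
        self.output.append((fact, self.broadcast_state[fact]))

join = BufferingJoin()
join.on_fact("x")                     # arrives early, gets buffered
join.on_broadcast({"x": 1, "y": 2})   # broadcast arrives, buffer flushes
join.on_fact("y")
print(join.output)
# [('x', 1), ('y', 2)]
```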

Re: Propagating event time field from nested query

2019-11-14 Thread Piyush Narang
Hi Dawid, Thanks for getting back, the *ROWTIME* modifier did ring a bell and I was able to find the issue. We are registering the inner table correctly (timestamp is of type timestamp(3) rowtime), but we had an intermediate step where we converted that to a Datastream to optionally add custom

Re: Flink-JDBC JDBCUpsertTableSink keyFields Problem

2019-11-14 Thread Jingsong Li
Hi Polarisary: I think I see what you mean. You want to use upsert mode for an append stream without keyFields. In fact, both isAppend and keyFields are set automatically by the planner framework; you can't control them. So yes, it is related to the SQL: only an upsert stream can be inserted into the sink
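For illustration, a plain-Python sketch of what an upsert sink does with a keyed update stream (the function name and shape are made up, not JDBCUpsertTableSink internals): each record overwrites the row for its key, leaving the table with the latest value per key.

```python
def apply_upserts(table, updates, key_index=0):
    """Apply (user, count) update records to a dict keyed on `user`."""
    for record in updates:
        table[record[key_index]] = record  # overwrite the row for this key
    return table

updates = [("alice", 1), ("bob", 1), ("alice", 2)]
print(apply_upserts({}, updates))
# {'alice': ('alice', 2), 'bob': ('bob', 1)} -- alice's row was upserted
```

Without key fields there is nothing to overwrite on, which is why the planner only permits upsert sinks for queries that define a unique key.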