Hello,
Does Flink 1.7.1 support connecting to a relational database (MySQL)?
I want to use MySQL as my streaming source to read some configuration.
Thanks,
Manju
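For reference, one common pattern is a custom JDBC-backed source function. Below is a minimal sketch; the URL, credentials, table and column names, and the refresh interval are all placeholders, not something taken from this thread:

    import org.apache.flink.streaming.api.functions.source.RichSourceFunction;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Hypothetical source that periodically re-reads configuration rows from MySQL.
    public class MySqlConfigSource extends RichSourceFunction<String> {

        private volatile boolean running = true;

        @Override
        public void run(SourceContext<String> ctx) throws Exception {
            while (running) {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost:3306/config_db", "user", "password");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                         "SELECT config_key, config_value FROM app_config")) {
                    while (rs.next()) {
                        ctx.collect(rs.getString("config_key") + "=" + rs.getString("config_value"));
                    }
                }
                Thread.sleep(60_000); // re-read the configuration every minute
            }
        }

        @Override
        public void cancel() {
            running = false;
        }
    }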
Thanks Chesnay for raising this discussion thread. I think there are 3
major use scenarios for the Flink binary distribution:
1. Use it to set up a standalone cluster
2. Use it to try out Flink features, such as via the scala-shell or
sql-client
3. Downstream projects use it to integrate with their syst…
Hi Gagan,
> But I also have a requirement for event time based sliding window
aggregation
Yes, you can achieve this with the Flink Table API/SQL. However, sliding
windows currently don't support early firing, i.e., they only output results
when the event time reaches the end of the window. Once the window fires, th…
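For illustration, here is a minimal sketch of an event-time sliding (HOP) window aggregation in SQL, run through the Java Table API. The table and column names (Orders, user_id, rowtime) are invented, not from this thread:

    // Assumes "Orders" is a registered table with event-time attribute "rowtime"
    // and "tableEnv" is the StreamTableEnvironment.
    Table result = tableEnv.sqlQuery(
        "SELECT user_id, COUNT(*) AS cnt " +
        "FROM Orders " +
        "GROUP BY HOP(rowtime, INTERVAL '5' MINUTE, INTERVAL '1' HOUR), user_id");
    // Results are emitted only once the watermark passes the end of each hop
    // window, i.e. there is no early firing, as described above.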
Hey Aaron,
sorry for the late reply.
(1) I think I was able to reproduce this issue using snappy-java. I've
filed a ticket here:
https://issues.apache.org/jira/browse/FLINK-11402. Can you check whether
the ticket description is in line with what you are
experiencing? Most importantly, do you se…
Thank you guys. It's great to hear about multiple solutions to achieve this. I
understand that records, once emitted to Kafka, cannot be deleted, and that's
acceptable for our use case, as the last updated value should always be
correct. However, as I understand it, most of these solutions will work for
global aggre…
In Flink you can't read data from Kafka with the DataSet API (batch),
and you don't want to mess with starting and stopping your job every few hours.
Can you elaborate more on your use case?
Are you going to use keyBy? Is there any way to use a trigger ... ?
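For what it's worth, reading Kafka with the DataStream API is the usual route. A minimal sketch, with the topic name and connection properties as placeholders (assuming the universal flink-connector-kafka dependency):

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092");
    props.setProperty("group.id", "my-group");

    // "env" is the StreamExecutionEnvironment; the topic name is invented.
    DataStream<String> events = env.addSource(
        new FlinkKafkaConsumer<>("events-topic", new SimpleStringSchema(), props));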
On Mon, Jan 21, 2019 at 4:43 PM Jonny Graham wrote:
@Jeff: It depends on whether the user can define a time window for his
condition. As Gagan described his problem, it was about a “global” threshold of
pending orders.
I have just thought of another solution that should work without any custom
code: converting the “status” field to a status_value int:
- “+1” for
Hi all,
I'm new to Flink so am probably missing something simple. I'm using
Flink 1.7.1 and am trying to use temporal table functions, but I'm not
getting the results I expect. With the example code below, I would
expect 4 records to be output (one for each order), but instead I'm only
seeing a…
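For reference, a minimal sketch of how a temporal table function is typically registered and joined in Flink 1.7; the tables and columns (RatesHistory, Orders, r_rowtime, ...) are invented and not taken from the example mentioned above:

    // "RatesHistory" and "Orders" are assumed to be registered tables with
    // event-time attributes "r_rowtime" and "o_rowtime".
    Table ratesHistory = tableEnv.scan("RatesHistory");
    TemporalTableFunction rates =
        ratesHistory.createTemporalTableFunction("r_rowtime", "r_currency");
    tableEnv.registerFunction("Rates", rates);

    Table result = tableEnv.sqlQuery(
        "SELECT o.amount * r.r_rate, o.o_rowtime " +
        "FROM Orders AS o, LATERAL TABLE (Rates(o.o_rowtime)) AS r " +
        "WHERE o.o_currency = r.r_currency");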
Hello all,
I have a question, please!
I'm using Flink 1.6 to process our data in streaming mode.
I wonder whether, at a given event, there is a way to get the current total
number of processed events (before this event).
If possible, I want to get this total number of processed events as a value
stat…
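For reference, one way to sketch this (not an official pattern; the event type and field names are invented) is to keep a running counter in the operator and attach it to each event. Note that this counts per parallel subtask and is not checkpointed; an exact global count would need parallelism 1 or keyed/operator state:

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.api.java.tuple.Tuple2;

    // Hypothetical mapper: emits each event together with the number of events
    // this subtask had processed before it.
    public class CountingMapper extends RichMapFunction<Event, Tuple2<Event, Long>> {

        private long processedSoFar;

        @Override
        public Tuple2<Event, Long> map(Event event) {
            long before = processedSoFar; // events processed before this one
            processedSoFar++;
            return Tuple2.of(event, before);
        }
    }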
We have a Kafka stream of events that we want to process with a Flink
DataStream job. However, the stream is populated by an upstream batch
process that only executes every few hours, so the stream has very 'bursty'
behaviour. We need a window based on event time to await the next events for…
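For the event-time side of this, a minimal sketch of assigning timestamps and watermarks (the event type, accessor, and the 5-minute bound are placeholders):

    // With a bursty source the watermark only advances when new records arrive,
    // so the last window of a burst stays open until the next batch shows up.
    DataStream<Event> withTimestamps = events.assignTimestampsAndWatermarks(
        new BoundedOutOfOrdernessTimestampExtractor<Event>(Time.minutes(5)) {
            @Override
            public long extractTimestamp(Event e) {
                return e.getEventTime(); // hypothetical accessor
            }
        });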
I am thinking of another approach instead of a retract stream. Is it possible
to define a custom window to do this? The window would be defined for each
order, and then you just need to analyze the events in this window.
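For reference, a per-order window could look roughly like this sketch (the event type, key field, gap, and window function are invented):

    // One event-time session window per order; a 30-minute gap closes the window.
    DataStream<OrderSummary> perOrder = orderEvents
        .keyBy("orderId") // assuming OrderEvent is a POJO with an orderId field
        .window(EventTimeSessionWindows.withGap(Time.minutes(30)))
        .process(new AnalyzeOrderEvents()); // hypothetical ProcessWindowFunction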
Piotr Nowojski wrote on Mon, Jan 21, 2019 at 8:44 PM:
> Hi,
>
> There is a missing feature in Fl
Hi Kostas,
I have a similar scenario where I have to clear window elements upon
reaching some count, or clear the elements if they are older than one hour.
I'm using the approach below, and just wanted to know if it's the right way:
DataStream<…> out = mappedFields
    .map(new CustomMapFunction())
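For reference, one way to express "fire and clear the window on N elements or when the hour is up" is an hourly event-time window with a custom trigger. A rough, untested sketch; the class name and threshold are invented:

    import org.apache.flink.api.common.state.ReducingState;
    import org.apache.flink.api.common.state.ReducingStateDescriptor;
    import org.apache.flink.api.common.typeutils.base.LongSerializer;
    import org.apache.flink.streaming.api.windowing.triggers.Trigger;
    import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

    // Fires and purges the window either when maxCount elements have arrived or
    // when the event-time window (e.g. one hour) ends.
    public class CountOrTimeTrigger extends Trigger<Object, TimeWindow> {

        private final long maxCount;
        private final ReducingStateDescriptor<Long> countDesc =
            new ReducingStateDescriptor<>("count", (Long a, Long b) -> a + b, LongSerializer.INSTANCE);

        public CountOrTimeTrigger(long maxCount) {
            this.maxCount = maxCount;
        }

        @Override
        public TriggerResult onElement(Object element, long ts, TimeWindow window, TriggerContext ctx)
                throws Exception {
            ctx.registerEventTimeTimer(window.maxTimestamp());
            ReducingState<Long> count = ctx.getPartitionedState(countDesc);
            count.add(1L);
            if (count.get() >= maxCount) {
                count.clear();
                return TriggerResult.FIRE_AND_PURGE;
            }
            return TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
            return time == window.maxTimestamp() ? TriggerResult.FIRE_AND_PURGE : TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }

        @Override
        public void clear(TimeWindow window, TriggerContext ctx) throws Exception {
            ctx.deleteEventTimeTimer(window.maxTimestamp());
            ctx.getPartitionedState(countDesc).clear();
        }
    }

    // Usage on the keyed stream, roughly:
    //   .window(TumblingEventTimeWindows.of(Time.hours(1)))
    //   .trigger(new CountOrTimeTrigger(1000))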
Hi,
At the moment, the Flink Table API/SQL is missing support for retraction
streams as input (or for converting an append stream into a retraction stream).
With that, your problem would simplify to one simple `SELECT uid,
count(*) FROM Changelog GROUP BY uid`. There is an ongoing…
Hi Gagan,
Yes, you can achieve this with the Flink Table API/SQL. However, you have to
pay attention to the following things:
1) Currently, Flink only ingests append streams. In order to ingest upsert
streams (streams with keys), you can use groupBy with a user-defined
LAST_VALUE aggregate function. For i…
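For illustration, a minimal sketch of such a user-defined LAST_VALUE aggregate for the Table API; the accumulator simply keeps the most recently accumulated value, and the types and names are simplified and invented:

    import org.apache.flink.table.functions.AggregateFunction;

    public class LastValue extends AggregateFunction<String, LastValue.Acc> {

        public static class Acc {
            public String last;
        }

        @Override
        public Acc createAccumulator() {
            return new Acc();
        }

        @Override
        public String getValue(Acc acc) {
            return acc.last;
        }

        // Called by the framework for each input row.
        public void accumulate(Acc acc, String value) {
            acc.last = value;
        }
    }

    // Registration and use, assuming a registered table "OrderChangelog":
    //   tableEnv.registerFunction("LAST_VALUE", new LastValue());
    //   Table upserted = tableEnv.sqlQuery(
    //       "SELECT orderId, LAST_VALUE(status) AS status FROM OrderChangelog GROUP BY orderId");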
Hi,
You have to use the `open()` method to handle initialisation of the things
required by your code/operators. Due to the nature of the LocalEnvironment, the
life cycle of the operators there is different compared to what happens when
submitting a job to a real cluster. With remote environments your…
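To make that concrete, a minimal sketch; the client type and its calls are hypothetical placeholders for whatever needs initialising:

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;

    public class MyMapper extends RichMapFunction<String, String> {

        private transient SomeClient client; // hypothetical non-serializable dependency

        @Override
        public void open(Configuration parameters) {
            // Runs once per parallel instance after the operator is deployed,
            // both on a real cluster and in a LocalEnvironment.
            client = new SomeClient();
        }

        @Override
        public String map(String value) {
            return client.process(value);
        }
    }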