Hi Иван,
The cause of your problem is quite simple:
- If you do windowing by event time, all the sources need to emit watermarks.
- Watermarks are the logical clock used for event-time timing.
- You could either use processing-time windows or adjust the watermark
strategy of your sources.
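To make the watermark point concrete, here is a minimal plain-Python sketch (not Flink code) of the logical clock: an event-time tumbling window fires only once the watermark, computed here as the maximum event time seen minus an out-of-orderness bound, passes the window end. The 10 s window size and 2 s bound are illustrative assumptions.

```python
# Sketch of watermark-driven event-time tumbling windows. Timestamps in ms.
WINDOW_MS = 10_000
MAX_OUT_OF_ORDERNESS_MS = 2_000

def run(events):
    """events: list of (event_time_ms, value); returns windows as they fire."""
    windows = {}              # window start -> collected values
    max_seen = float("-inf")  # highest event time observed so far
    fired = []
    for ts, value in events:
        start = ts - ts % WINDOW_MS
        windows.setdefault(start, []).append(value)
        max_seen = max(max_seen, ts)
        watermark = max_seen - MAX_OUT_OF_ORDERNESS_MS  # the logical clock
        # fire every window whose end is at or before the watermark
        for s in sorted(windows):
            if s + WINDOW_MS <= watermark:
                fired.append((s, windows.pop(s)))
    return fired

# the window [0, 10000) fires only after an event pushes the watermark past 10000
print(run([(1_000, "a"), (9_000, "b"), (13_000, "c")]))  # → [(0, ['a', 'b'])]
```

If a source never emits watermarks, the clock never advances past the window end and the window never fires, which is the symptom this thread describes.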
Hey Иван,
Use *TumblingProcessingTimeWindows* instead of TumblingEventTimeWindows.
TumblingEventTimeWindows requires a watermark strategy.
*Ref*
https://stackoverflow.com/questions/72291659/flink-tumbling-window-is-not-triggered-no-watermark-strategy
*Regards*
Shrihari
On Fri, Jun 30, 2023 at
Hello,
please help me, I can't join two streams. The joined stream yields
zero messages and I can't understand why.
Kafka Topics:
1st stream
topic1: {'data': {'temp':25.2, 'sensore_name': 'T1', 'timestamp':
123123131}, 'compare_with': 'T2'}
2nd stream
topic2: {'data': {'temp':28, 'sensore_name':
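A minimal plain-Python sketch of what the join should pair up, using the field names from the post (`sensore_name`, `compare_with`); the 5 s join window and the second record's contents are assumptions. If the pairing logic works in isolation like this but Flink emits nothing, the usual suspect is event-time/watermark configuration rather than the key logic.

```python
# Pair records from the two topics: a left record joins every right record
# whose sensor name matches `compare_with` and whose timestamp is close enough.
WINDOW_MS = 5_000  # assumed join window

stream1 = [{"data": {"temp": 25.2, "sensore_name": "T1", "timestamp": 1_000},
            "compare_with": "T2"}]
stream2 = [{"data": {"temp": 28, "sensore_name": "T2", "timestamp": 2_000}}]

def join(left, right, window_ms):
    out = []
    for l in left:
        for r in right:
            same_key = l["compare_with"] == r["data"]["sensore_name"]
            in_window = abs(l["data"]["timestamp"]
                            - r["data"]["timestamp"]) <= window_ms
            if same_key and in_window:
                out.append((l["data"]["temp"], r["data"]["temp"]))
    return out

print(join(stream1, stream2, WINDOW_MS))  # → [(25.2, 28)]
```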
A question: if I use Flink for batch computation with the DataStream API, are there any possible optimizations? For example, can data be passed between operators directly as matrices for computation?
> ... a state backward in (processing) time ...
(of course not processing, I meant to say event time)
On 6/29/23 14:45, Jan Lukavský wrote:
Hi Kamal,
you probably have several options:
a) bundle your private key and certificate into your Flink
application's jar (not recommended, your service's private key will have
to be not exactly "private")
b) create a service which will provide certificate for your service
during runtime
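Option (b) can be sketched generically: build the TLS context from certificate material obtained at startup rather than baked into the application jar. This is plain Python stdlib to illustrate the idea, not Flink-specific API; the file paths in the comments are placeholders for whatever your certificate service delivers.

```python
# Generic sketch: configure TLS for a client socket from runtime-supplied
# certificate material instead of resources bundled into the jar.
import ssl

def make_client_context(cafile=None):
    # Verifies the server certificate and checks the hostname, as any
    # properly secured client-server TCP connection should.
    ctx = ssl.create_default_context(cafile=cafile)
    # For mutual TLS you would also present a client certificate:
    # ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")
    return ctx

ctx = make_client_context()
print(ctx.check_hostname, ctx.verify_mode == ssl.CERT_REQUIRED)  # → True True
```

The context would then wrap the plain socket your custom source opens (e.g. `ctx.wrap_socket(sock, server_hostname=host)`).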
BTW, it seems I spoke too soon in my previous email. I left the job running
overnight with each source having its own alignment group to evaluate only
per-split alignment, and I can see that eventually some partitions never
resumed consumption and the consumer lag increased.
Regards,
Alexis.
Am
Hello Community,
I have created a custom TCP stream source and am reading data from it.
But this TCP connection needs to be secured, i.e. SSL-based. The query is how to
configure/provide certificates via Flink for a secured client-server TCP
connection?
Rgds,
Kamal
Hi Mahmoud,
While it's not an answer to your questions, I do want to point out
that the DataSet API is deprecated and will be removed in a future
version of Flink. I would recommend moving to either the Table API or
the DataStream API.
Best regards,
Martijn
On Thu, Jun 22, 2023 at 6:14 PM
The Apache Flink community is very happy to announce the release of
Apache flink-connector-jdbc v3.1.1. This version is compatible with
Flink 1.16 and Flink 1.17.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming applications.
Hi Martijn,
thanks for the pointers. I think the issue I'm seeing is not caused by
those because in my case the watermarks are not negative. Some more
information from my setup in case it's relevant:
- All Kafka topics have 6 partitions.
- Job parallelism is 2, but 2 of the Kafka sources are
Hi Dani,
There are two things that I notice:
1. You're mixing different Flink versions (1.16 and 1.17): all Flink
artifacts should be from the same Flink version
2. S3 plugins need to be added to the plugins folder of Flink, because they
are loaded via the plugin mechanism. See
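The layout the plugin mechanism expects can be sketched like this; the temp directory stands in for a real `$FLINK_HOME`, and the jar name is illustrative:

```python
# Each plugin must sit in its own subfolder under $FLINK_HOME/plugins,
# holding that plugin's jar; Flink loads it in an isolated classloader.
import os, tempfile

flink_home = tempfile.mkdtemp()  # stand-in for a real Flink distribution
plugin_dir = os.path.join(flink_home, "plugins", "s3-fs-hadoop")
os.makedirs(plugin_dir)
# shutil.copy("flink-s3-fs-hadoop-1.17.1.jar", plugin_dir)  # the real jar goes here
print(os.listdir(os.path.join(flink_home, "plugins")))  # → ['s3-fs-hadoop']
```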
Hi Alexis,
There are a couple of recent Flink tickets on watermark alignment,
specifically https://issues.apache.org/jira/browse/FLINK-32414 and
https://issues.apache.org/jira/browse/FLINK-32420 - Could the latter also be
applicable in your case?
Best regards,
Martijn
On Wed, Jun 28, 2023 at
Hi Vladislav,
I think it might be worthwhile to upgrade to Flink 1.17, given the
improvements that have been made in Flink 1.16 and 1.17 on batch
processing. See for example the release notes of 1.17, with an entire
section on batch processing
Thanks for reaching out, Stephen. I've also updated the Slack invite link at
https://flink.apache.org/community/#slack
Best regards, Martijn
On Thu, Jun 29, 2023 at 3:20 AM yuxia wrote:
> Hi, Stephen.
> Welcome to join Flink Slack channel. Here's my invitation link:
Hi Mike,
Let me sketch it:
* The trick I use (no idea if it is wise or not) is to have
nginx-ingress set up and then specify a service selecting the nginx…controller
pods [1]
* You don't need to bind to the node address (see externalIPs); you could
just as well port-forward this