Hello,
Check whether your topic's replication factor is below the min.insync.replicas
setting of Kafka. I had the same problem and that was it for me.
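The rule behind Frank's check: Flink's exactly-once Kafka sink uses a transactional producer, which typically runs with acks=all, and such writes only succeed while at least min.insync.replicas replicas are in sync. A tiny sketch of the invariant (the helper name is hypothetical; the two values come from `kafka-topics.sh --describe` and the topic/broker config):

```java
public class ReplicationCheck {
    // A producer with acks=all needs at least min.insync.replicas in-sync
    // replicas to acknowledge a write, so the topic must be created with
    // replicationFactor >= minInsyncReplicas (and >= minInsyncReplicas + 1
    // if it should keep accepting writes with one broker down).
    static boolean satisfiesMinIsr(int replicationFactor, int minInsyncReplicas) {
        return replicationFactor >= minInsyncReplicas;
    }

    public static void main(String[] args) {
        System.out.println(satisfiesMinIsr(3, 2)); // typical healthy setup
        System.out.println(satisfiesMinIsr(1, 2)); // the failure described above
    }
}
```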
Frank
On Sat, Apr 9, 2022, 04:01 Praneeth Ramesh wrote:
> Hi All
>
> I have a job which reads from kafka and applies some transactions and
> writes the data back t
Hi Roman,
Here's an example of a WindowReaderFunction:
public class StateReaderFunction extends WindowReaderFunction {
    private static final ListStateDescriptor<Integer> LSD =
            new ListStateDescriptor<>(
                    "descriptorId",
                    Integer.class
            );

    @Override
@Override
I will dig deeper into Statefun. Also, yes: for now I can also try the
Spring/Kafka solution if Statefun doesn't fit.
Austin - as far as rewriting our microservices in Flink, here are some things I
was looking for:
- We need to be able to easily share/transform data with other teams.
Flink SQL seems
Hi Alexis,
If I understand correctly, the provided StateProcessor program gives
you the number of stream elements per operator. However, you mentioned
that these operators have collection-type states (ListState and
MapState). That means that per one entry there can be an arbitrary
number of state
I'd try to increase the value, so that the timeout doesn't happen
during the shutdown.
Regards,
Roman
On Fri, Apr 8, 2022 at 7:50 PM Alexey Trenikhun wrote:
>
> Hi Roman,
> Currently rest.async.store-duration is not set. Are you suggesting trying to
> decrease the value from the default, or vice versa?
It seems to be possible now with RequestReplyHandlers from Java SDK
[1] (or other SDKs) unless I'm missing something.
[1]
https://nightlies.apache.org/flink/flink-statefun-docs-release-3.2/docs/sdk/java/#serving-functions
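A minimal serving sketch with the Java SDK's RequestReplyHandler (the function
type name, the GreeterFn body, and the surrounding HTTP server wiring are all
assumptions):

```java
import java.util.concurrent.CompletableFuture;

import org.apache.flink.statefun.sdk.java.Context;
import org.apache.flink.statefun.sdk.java.StatefulFunction;
import org.apache.flink.statefun.sdk.java.StatefulFunctionSpec;
import org.apache.flink.statefun.sdk.java.StatefulFunctions;
import org.apache.flink.statefun.sdk.java.TypeName;
import org.apache.flink.statefun.sdk.java.handler.RequestReplyHandler;
import org.apache.flink.statefun.sdk.java.message.Message;
import org.apache.flink.statefun.sdk.java.slice.Slice;
import org.apache.flink.statefun.sdk.java.slice.Slices;

public class Serving {
    // Hypothetical function: just acknowledges every message.
    static class GreeterFn implements StatefulFunction {
        @Override
        public CompletableFuture<Void> apply(Context context, Message message) {
            return context.done();
        }
    }

    public static void main(String[] args) {
        StatefulFunctions functions = new StatefulFunctions()
            .withStatefulFunction(
                StatefulFunctionSpec.builder(
                        TypeName.typeNameFromString("example/greeter"))
                    .withSupplier(GreeterFn::new)
                    .build());

        RequestReplyHandler handler = functions.requestReplyHandler();

        // Inside any HTTP endpoint: pass the raw request body to the handler
        // and write the completed Slice back as the response body.
        byte[] requestBody = new byte[0]; // placeholder for the HTTP body
        CompletableFuture<Slice> response = handler.handle(Slices.wrap(requestBody));
    }
}
```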
Regards,
Roman
On Fri, Apr 8, 2022 at 7:45 PM Austin Cawley-Edwards
wrote
Hi Carlos,
AFAIK, Flink FileSource is capable of checkpointing while reading the
files (at least in Streaming Mode).
As for the watermarks, I think FLIP-182 [1] could solve the problem;
however, it's currently under development.
I'm also pulling in Arvid and Fabian who are more familiar with the
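For reference, a continuously monitoring FileSource can be sketched like this (the path is a placeholder, and the line-format class is named TextLineFormat or TextLineInputFormat depending on the Flink version):

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FileSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpointable file source; monitorContinuously keeps it unbounded
        // so it can participate in streaming-mode checkpoints.
        FileSource<String> source = FileSource
                .forRecordStreamFormat(new TextLineInputFormat(), new Path("/data/in"))
                .monitorContinuously(Duration.ofSeconds(30))
                .build();

        DataStream<String> lines = env.fromSource(
                source, WatermarkStrategy.noWatermarks(), "file-source");

        lines.print();
        env.execute();
    }
}
```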
Hi Roman,
Currently rest.async.store-duration is not set. Are you suggesting to try to
decrease value from default or vice-versa?
Thanks,
Alexey
From: Roman Khachatryan
Sent: Friday, April 8, 2022 5:32:45 AM
To: Alexey Trenikhun
Cc: Flink User Mail List
Subject
Good suggestion, though a common misconception with Statefun is that HTTP
ingestion is possible. Last time I checked, it was still under theoretical
discussion. Do you know the current state there?
Austin
On Fri, Apr 8, 2022 at 1:19 PM Roman Khachatryan wrote:
> Hi,
>
> Besides the solution su
Hi,
Besides the solution suggested by Austin, you might also want to look
at Stateful Functions [1]. They provide a more convenient programming
model for the use-case I think, while DataStream is a relatively
low-level API.
[1]
https://nightlies.apache.org/flink/flink-statefun-docs-stable/
Rega
Hi Jason,
No, there is no HTTP source/sink support that I know of for Flink. Would
running the Spring + Kafka solution in front of Flink work for you?
On a higher level, what drew you to migrating the microservice to Flink?
Best,
Austin
On Fri, Apr 8, 2022 at 12:35 PM Jason Thomas
wrote:
> I
I'm taking an existing REST based microservice application and moving all
of the logic into Flink DataStreams.
Is there an easy way to get a request/response from a Flink DataStream so I
can 'call' into it from a REST service? For example, something similar to
this Kafka streams example that use
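The request/reply-over-Kafka pattern referenced here boils down to correlating an async reply with a pending HTTP request. A minimal self-contained sketch of that correlation step (class and method names are hypothetical; in a real setup the send/receive would go through Kafka topics, e.g. via Spring's ReplyingKafkaTemplate):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical bridge: the REST layer parks a future per correlation id;
// the reply-topic consumer completes it when the processed record comes back.
public class RequestReplyBridge {
    private final Map<String, CompletableFuture<String>> pending =
            new ConcurrentHashMap<>();

    // Called by the REST handler: "send" the request and return a future reply.
    public CompletableFuture<String> request(String payload) {
        String correlationId = UUID.randomUUID().toString();
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        // In the real setup this would be producer.send() to the request topic,
        // with correlationId carried in a record header.
        onReply(correlationId, "processed:" + payload); // simulated reply
        return future;
    }

    // Called by the reply-topic consumer for each incoming reply record.
    public void onReply(String correlationId, String result) {
        CompletableFuture<String> future = pending.remove(correlationId);
        if (future != null) {
            future.complete(result);
        }
    }

    public static void main(String[] args) throws Exception {
        RequestReplyBridge bridge = new RequestReplyBridge();
        System.out.println(bridge.request("order-42").get());
    }
}
```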
Hello,
I have a streaming job running on Flink 1.14.4 that uses managed state with
RocksDB with incremental checkpoints as backend. I've been monitoring a dev
environment that has been running for the last week and I noticed that state
size and end-to-end duration have been increasing steadily.
Hi Alexis,
RocksDB itself provides a manual compaction API [1], but Flink currently does
not call these APIs to support periodic compactions.
If Flink supported such periodic compaction, it would, from my understanding,
be somewhat like major compaction in HBase. I am not sure whether this is rea
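That said, RocksDB's own periodic compaction can be enabled from user code through Flink's RocksDBOptionsFactory (registered e.g. via state.backend.rocksdb.options-factory). A hedged sketch; whether setPeriodicCompactionSeconds is available depends on the RocksDB version bundled with your Flink:

```java
import java.util.Collection;

import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

public class PeriodicCompactionFactory implements RocksDBOptionsFactory {
    @Override
    public DBOptions createDBOptions(
            DBOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(
            ColumnFamilyOptions currentOptions,
            Collection<AutoCloseable> handlesToClose) {
        // Ask RocksDB to rewrite (and thereby drop expired entries from) any
        // SST file older than ~1 day, even if it is not otherwise selected
        // for compaction.
        return currentOptions.setPeriodicCompactionSeconds(24 * 60 * 60);
    }
}
```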
Hello,
Unfortunately, it's difficult to name the exact reason why the timeout
happens because there's no message logged.
I've opened a ticket to improve the logging [1].
There, I also listed some code paths that might lead to this situation.
From the described scenario, I'd suppose that it's
Com
May I ask if anyone tested RocksDB’s periodic compaction in the meantime? And
if yes, if it helped with this case.
Regards,
Alexis.
From: tao xiao
Sent: Samstag, 18. September 2021 05:01
To: David Morávek
Cc: Yun Tang ; user
Subject: Re: RocksDB state not cleaned up
Thanks for the feedback!
Hi Jin,
Flink is an open source project, so the community works on a best-effort basis.
There's no guaranteed/quick support available. There are companies that
provide commercial support if needed.
Best regards,
Martijn Visser
https://twitter.com/MartijnVisser82
https://github.com/MartijnVisser
On Fri
confirmed that moving back to FlinkKafkaConsumer fixes things.
is there some notification channel/medium that highlights critical
bugs/issues on the intended features like this pretty readily?
On Fri, Apr 8, 2022 at 2:18 AM Jin Yi wrote:
> based on symptoms/observations on the first operator (L
Hi Roman,
Thanks for the quick response. It wasn't that, but your comment about erasure
made me realize I should have debugged the code and looked at the types.
Apparently setting TTL changes the serializer, so I also had to add TTL in the
WindowReaderFunction.
Regards,
Alexis.
-----Original Message-----
Hi Alexis,
I think your setup is fine, but probably Java type erasure makes Flink
consider the two serializers as different.
Could you try creating a MapStateDescriptor by explicitly providing
serializers (constructed manually)?
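For example (the value type List<String> here is only an assumption for
illustration; substitute the real key/value types of the descriptor):

```java
import java.util.List;

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeutils.base.ListSerializer;
import org.apache.flink.api.common.typeutils.base.LongSerializer;
import org.apache.flink.api.common.typeutils.base.StringSerializer;

public class ExplicitSerializers {
    // Bypasses TypeInformation extraction (and thus type erasure) by handing
    // the descriptor concrete serializer instances directly.
    static final MapStateDescriptor<Long, List<String>> MSD =
            new MapStateDescriptor<>(
                    "descriptorName",
                    LongSerializer.INSTANCE,
                    new ListSerializer<>(StringSerializer.INSTANCE));
}
```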
Regards,
Roman
On Fri, Apr 8, 2022 at 10:01 AM Alexis Sarda-Espino
based on symptoms/observations on the first operator (LogRequestFilter)
watermark and event timestamps, it does seem like it's the bug. things
track fine (timestamp > watermark) for the first batch of events, then the
event timestamps go back into the past and are "late".
looks like the 1.14 back
Hi Jin,
If you are using new FLIP-27 sources like KafkaSource, per-partition watermarks
(or per-split watermarks) are a default feature integrated into the SourceOperator. You
might hit the bug described in FLINK-26018 [1], which happens during the first
fetch of the source that records in the first spl
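For reference, a typical KafkaSource setup where those per-partition watermarks apply automatically (broker address, topic, group id, and the out-of-orderness bound are placeholders):

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")
                .setTopics("events")
                .setGroupId("my-group")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // Watermarks are generated per split (per partition) inside the
        // SourceOperator; no per-partition assigner wiring is needed.
        DataStream<String> stream = env.fromSource(
                source,
                WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5)),
                "kafka-source");

        stream.print();
        env.execute();
    }
}
```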
Hi everyone,
I have a ProcessWindowFunction that uses Global window state. It uses MapState
with a descriptor defined like this:
MapStateDescriptor> msd = new MapStateDescriptor<>(
        "descriptorName",
        TypeInformation.of(Long.class),
        TypeInformation.of(new TypeHint>() {})
);