Hi,
If you are using HashMapStateBackend, there may be some noticeable overhead.
If you are using RocksDBStateBackend, I think the overhead would be minor.
As we know, Flink writes the key group as a prefix of the key to speed up
rescaling, so the serialized key looks like: key group | key len | key | ..
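A conceptual sketch of that layout (not Flink's actual internal code; the class
and method names are made up for illustration, and the real prefix width
depends on the configured max parallelism):

import java.io.IOException;
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.api.common.typeutils.base.StringSerializer;
import org.apache.flink.core.memory.DataOutputSerializer;

public class KeyGroupPrefixSketch {
    // Builds "key group | serialized key" the way the RocksDB backend conceptually does.
    public static byte[] buildCompositeKey(int keyGroup, String userKey) throws IOException {
        DataOutputSerializer out = new DataOutputSerializer(32);
        out.writeShort(keyGroup);                      // key-group prefix
        TypeSerializer<String> keySerializer = StringSerializer.INSTANCE;
        keySerializer.serialize(userKey, out);         // length info + key bytes for String
        return out.getCopyOfBuffer();
    }
}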
In theory, the serializer should remain compatible if you don't change
anything.
So could you share more information so that we can find the root cause:
1. the exception stack trace, where we can see which serializer caused the error
2. the Flink configuration, where we can check for potentially problematic parameters
On Mon, Jul 10
Hi Anastasios,
I think you may need to implement a custom trigger to emit a record when
a session window is created. You can refer to [1] for more detailed
information.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/windows/#triggers
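For illustration only (this is not from the linked docs, just a rough sketch of
such a trigger for event-time session windows; the class name is made up, and a
window merge may cause one extra early firing because the flag state is not
carried over):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

public class EmitOnWindowCreationTrigger extends Trigger<Object, TimeWindow> {

    private final ValueStateDescriptor<Boolean> firedOnCreateDesc =
            new ValueStateDescriptor<>("firedOnCreate", Types.BOOLEAN);

    @Override
    public TriggerResult onElement(Object element, long timestamp, TimeWindow window,
                                   TriggerContext ctx) throws Exception {
        ctx.registerEventTimeTimer(window.maxTimestamp());
        ValueState<Boolean> firedOnCreate = ctx.getPartitionedState(firedOnCreateDesc);
        if (firedOnCreate.value() == null) {
            firedOnCreate.update(true);
            return TriggerResult.FIRE;   // emit a record as soon as the window is created
        }
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        // fire again at the end of the session, like the default event-time trigger
        return time == window.maxTimestamp() ? TriggerResult.FIRE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) throws Exception {
        ctx.deleteEventTimeTimer(window.maxTimestamp());
        ctx.getPartitionedState(firedOnCreateDesc).clear();
    }

    @Override
    public boolean canMerge() {
        return true;
    }

    @Override
    public void onMerge(TimeWindow window, OnMergeContext ctx) {
        ctx.registerEventTimeTimer(window.maxTimestamp());
    }
}

You would attach it with something like
.window(EventTimeSessionWindows.withGap(...)).trigger(new EmitOnWindowCreationTrigger()).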
Best,
Shammon FY
On
Hello Team,
Recently, we observed that Flink logs were missing when writing them to files
(which we forward to Splunk to see event metrics), even though the Flink UI
showed them accurately. Can you please help me understand what might be causing
this?
Regards,
Madan
Hi Sanket,
Yes, that's correct.
Thanks,
Martijn
On Fri, Jul 7, 2023 at 8:00 PM Sanket Agrawal
wrote:
> Hello Martijn,
>
> Thank you for your reply. Even for the newer versions of Flink it’s
> recommended to use MailboxExecutor in place of StreamTask’s
> getCheckpointLock() method, right?
>
>
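Since the quoted question is about MailboxExecutor: a minimal, hypothetical
sketch of the pattern, assuming the sink2 API where the writer's init context
exposes getMailboxExecutor() (MyAsyncWriter, someAsyncCall and handleResult are
made-up names):

import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import org.apache.flink.api.common.operators.MailboxExecutor;
import org.apache.flink.api.connector.sink2.Sink;
import org.apache.flink.api.connector.sink2.SinkWriter;

public class MyAsyncWriter implements SinkWriter<String> {

    private final MailboxExecutor mailboxExecutor;

    public MyAsyncWriter(Sink.InitContext context) {
        // Actions submitted here run on the task's main thread, serialized with
        // checkpointing -- the guarantee the checkpoint lock used to provide.
        this.mailboxExecutor = context.getMailboxExecutor();
    }

    @Override
    public void write(String element, Context context) throws IOException, InterruptedException {
        // Kick off asynchronous work; when it completes, re-enter the task thread
        // through the mailbox instead of synchronizing on a lock.
        someAsyncCall(element).whenComplete((result, error) ->
                mailboxExecutor.execute(() -> handleResult(result, error), "handle async result"));
    }

    @Override
    public void flush(boolean endOfInput) {
        // a real implementation would wait for in-flight async work here
    }

    @Override
    public void close() {
    }

    private CompletableFuture<String> someAsyncCall(String element) {
        return CompletableFuture.completedFuture(element); // placeholder for a real async client
    }

    private void handleResult(String result, Throwable error) {
        // runs on the task thread via the mailbox
    }
}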
Hi Jannik,
Can you share the values you're setting for your properties?
From the top of my head, you need to set:
value.avro-confluent.properties.auto.register.schemas=false
value.avro-confluent.properties.use.latest.version=true
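For context, a rough sketch of where those options go in a table definition
(the connector, topic, field names and URLs below are placeholders, not values
from this thread):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AvroConfluentTableSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // DDL with the two schema-registry options from above
        tEnv.executeSql(
            "CREATE TABLE events (\n"
          + "  user_id STRING,\n"
          + "  amount DOUBLE\n"
          + ") WITH (\n"
          + "  'connector' = 'kafka',\n"
          + "  'topic' = 'events',\n"
          + "  'properties.bootstrap.servers' = 'broker:9092',\n"
          + "  'value.format' = 'avro-confluent',\n"
          + "  'value.avro-confluent.url' = 'http://schema-registry:8081',\n"
          + "  'value.avro-confluent.properties.auto.register.schemas' = 'false',\n"
          + "  'value.avro-confluent.properties.use.latest.version' = 'true'\n"
          + ")");
    }
}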
Best regards,
Martijn
On Tue, Jul 11, 2023 at 4:
How should I set the max parallelism? If I set it to 200 or 2000, will the
performance be the same?
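Not an answer from the thread, just a sketch of where max parallelism is set;
note that it fixes the number of key groups (the unit of state redistribution)
and caps future rescaling, rather than being the parallelism the job actually
runs with:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MaxParallelismSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(8);        // how many parallel subtasks actually run
        env.setMaxParallelism(200);   // upper bound for rescaling (number of key groups)
        // build and execute the pipeline here
    }
}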
Do you know how they are overriding the method? Are they building their own
distribution of Flink with their own implementation of that method? I'd like to
avoid having to build that myself. I'd be interested in a solution in the
official release.
From: Meissner, Dylan
Sent: Friday, 30 Ju
Hi Jane,
Thanks for your response. Yes, it did throw a parsing error (from Apache
Calcite, which Flink uses internally, I guess).
Since I am creating this Flink table by consuming a Kafka topic, I don't
have the ability to change the Avro schema; maybe I can check the
possibility of introducing a new field
Hi amenreet,
I think there are two ways to clean up state data in a Flink job
automatically:
1. State TTL. You can configure the TTL [1] for state according to your
requirements, and the Flink job will clean up the data once it is out of date.
For Flink SQL jobs you can set a global TTL for all operators
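As a rough illustration of point 1 in the DataStream API (not taken from the
referenced docs; the state name is a placeholder, and for SQL jobs the
corresponding knob is the table.exec.state.ttl option):

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class StateTtlSketch {
    public static ValueStateDescriptor<Long> ttlDescriptor() {
        StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.hours(24))                                // entries expire after one day
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite) // refresh the timer on writes
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                .build();

        ValueStateDescriptor<Long> descriptor = new ValueStateDescriptor<>("lastSeen", Long.class);
        descriptor.enableTimeToLive(ttlConfig);  // expired entries are then cleaned up in the background
        return descriptor;
    }
}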
Hi Elakiya,
Did you encounter a ParserException when executing the DDL? AFAIK, Flink
SQL does not support declaring a nested column (compound identifier) as a
primary key at the syntax level.
A possible workaround is to change the schema so that it does not contain a
record type; then you can change the DDL to the following
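(The actual DDL example is cut off in the digest; below is only a rough,
hypothetical sketch of the flattening workaround, with made-up table and column
names and placeholder connector options:)

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FlattenedPrimaryKeySketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        tEnv.executeSql(
            "CREATE TABLE users (\n"
          + "  user_id STRING,\n"                     // was a field inside a record type
          + "  user_name STRING,\n"
          + "  PRIMARY KEY (user_id) NOT ENFORCED\n"  // a top-level column can be the primary key
          + ") WITH (\n"
          + "  'connector' = 'upsert-kafka',\n"
          + "  'topic' = 'users',\n"
          + "  'properties.bootstrap.servers' = 'broker:9092',\n"
          + "  'key.format' = 'raw',\n"
          + "  'value.format' = 'avro-confluent',\n"
          + "  'value.avro-confluent.url' = 'http://schema-registry:8081'\n"
          + ")");
    }
}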
Hi Team,
I wanted to confirm: does the local state which the TM stores in the directory
we provide through config (or the default, i.e. the /tmp folder) clear
itself from time to time, or does the size just keep on increasing?
Thanks
Regards
Amenreet Singh Sodhi