Hello,
Seeking some expertise on exception handling with checkpointing and
streaming. Let's say some bad data flows into your Flink application and
causes an exception you are not expecting. That exception will bubble up,
ultimately killing the respective task, and the app will not be able
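A common mitigation for this kind of poison-pill record is to guard deserialization and route unparseable input to a dead-letter channel instead of letting the exception bubble up and kill the task. A minimal plain-Python sketch of the pattern (all names here are illustrative, not Flink API):

```python
import json

def parse_record(raw: str):
    """Return (record, None) on success, (None, error) on bad input; never raise."""
    try:
        return json.loads(raw), None
    except json.JSONDecodeError as exc:
        return None, f"unparseable record: {exc}"

good, dead_letters = [], []
for raw in ['{"id": 1}', 'not-json-at-all', '{"id": 2}']:
    record, error = parse_record(raw)
    if error is None:
        good.append(record)
    else:
        # keep the raw payload alongside the error for later inspection
        dead_letters.append((raw, error))

print(len(good), len(dead_letters))  # → 2 1
```

In Flink itself the same idea usually lives in a try/catch inside a deserialization schema or flatMap that simply emits nothing (or writes to a side output) for bad records, so the job keeps running and checkpointing.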
Hi, so I have a dump file. What do I look for?
On Thu, Mar 31, 2022 at 3:28 PM John Smith wrote:
> Ok so if there's a leak, if I manually stop the job and restart it from
> the UI multiple times, I won't see the issue because the classes
> are unloaded correctly?
Hi Marjan,
The method `collect` is used to collect the content of a table. However,
`insert_statement` is an `INSERT INTO` statement, so there is no table to
collect from in your example. You could try the following code:
```
sql_statement = """
SELECT window_start, window_end, COUNT(
```
Hi Harshit,
I think you could double check whether the version of
flink-sql-connector-kafka.jar
is also 1.14.4.
Regards,
Dian
On Thu, Apr 14, 2022 at 7:51 PM harshit.varsh...@iktara.ai <
harshit.varsh...@iktara.ai> wrote:
From: harshit.varsh...@iktara.ai [mailto:harshit.varsh...@iktara.ai]
Sent: Thursday, April 14, 2022 4:04 PM
To: user-i...@flink.apache.org
Cc: harshit.varsh...@iktara.ai
Subject: Pyflink Kafka consumer error (version 1.14.4)
Dear Team,
I am new to PyFlink and request your support.
Hello,
I have a simple source table (using the Kafka connector) that reads and
stores data from a specific Kafka topic. I also have a print table:
> t_env.execute_sql("""
> CREATE TABLE print (
> window_start TIMESTAMP(3),
> window_end TIMESTAMP(3),
>
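For reference, a complete print-sink definition typically looks like the following; the count column name `cnt` is an assumption for illustration, not from the original message:

```
CREATE TABLE print (
    window_start TIMESTAMP(3),
    window_end   TIMESTAMP(3),
    cnt          BIGINT
) WITH (
    'connector' = 'print'
)
```

The `print` connector needs no external system, which makes it handy for inspecting the output of a windowed aggregation while debugging.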
Hi Qinghui,
If you're using SQL, please be aware that there are unfortunately no
application state compatibility guarantees. You can read more about this in
the documentation [1]. This is why the community has been working on
FLIP-190 to support this in future versions [2].
Best regards,
Martijn
Hello,
There was a network issue in my environment and the job had to restart. After
the job came back up, the logs showed a lot of lines like this:
RocksDBIncrementalRestoreOperation ... Starting to restore from state handle:
...
Interestingly, those entries include information about sizes in
Hello Yu'an,
Thanks for the reply.
I'm using the SQL API, not the `DataStream` API, in the job. So
there's no `keyBy` call directly in our code, but we do have some `GROUP
BY` clauses and joins in the SQL. (We are using deprecated table planners
both before and after the migration.)
Do you know what could
Hi Anitha,
As far as I can tell, the problem is with Avro itself. We upgraded the
Avro version used underneath in Flink 1.12.0: in 1.11.x we used Avro 1.8.2,
while starting from 1.12.x we use Avro 1.10.0. Maybe that's the problem.
You could try upgrading the Avro version in your program. Just