Will Flink invest in the ad-hoc query direction in the future? For example, building Trino/Presto-style performance optimizations into Flink itself, so that batch, streaming, and OLAP/ad-hoc workloads would need only a single engine.
Yes, restarting the app with a clean state does seem to fix the issue, but
I think I may have found a bug in Flink.
Here's how we can replicate it:
- Create a simple application with KeyedProcessFunction (with onTimer())
- Send a few records with the same key. In processElement(), register a
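The repro steps above can be sketched as follows. This is a minimal sketch, not the original poster's code: the key/value types, the 10-second offset, and the use of a processing-time timer are all assumptions.

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical repro: each element registers a processing-time timer for its key.
public class TimerRepro extends KeyedProcessFunction<String, String, String> {

    private transient ValueState<Long> timerState;

    @Override
    public void open(Configuration parameters) {
        timerState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("timer", Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out)
            throws Exception {
        // Register a timer 10 seconds from now for the current key.
        long fireAt = ctx.timerService().currentProcessingTime() + 10_000L;
        ctx.timerService().registerProcessingTimeTimer(fireAt);
        timerState.update(fireAt);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out)
            throws Exception {
        out.collect("timer fired for key " + ctx.getCurrentKey() + " at " + timestamp);
        timerState.clear();
    }
}
```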
Hi,
When trying to set up a demo for the kafka-sql-client reading an Avro topic
from Kafka, I run into problems with the additional dependencies.
In the spark-shell there is a --packages option which automatically
resolves any additional required jars (transitively) using the provided
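Flink's SQL client has no direct equivalent of spark-shell's transitive dependency resolution; the usual approach (a sketch of one common setup, with placeholder version numbers) is to download the self-contained "sql" connector/format jars and either pass them with `-j` or drop them into Flink's `lib/` directory:

```shell
# spark-shell resolves transitive dependencies automatically:
spark-shell --packages org.apache.spark:spark-avro_2.12:3.2.1

# With Flink's SQL client you instead supply fat "sql" jars explicitly,
# e.g. flink-sql-connector-kafka and flink-sql-avro (versions are placeholders):
./bin/sql-client.sh \
  -j lib/flink-sql-connector-kafka_2.12-1.14.4.jar \
  -j lib/flink-sql-avro-1.14.4.jar
```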
Dear Flink community,
On the 30th of March we will host a meetup on the upcoming Flink 1.15 release.
Get all the information here [1].
There will also be an AMA with Matthias and Chesnay. If you already have a
question on your mind, let me know.
You might want to have a look at the release wiki
Is anyone able to comment on the below? My worry is that this class isn't well
supported, so I may need to find an alternative way to bulk-copy data into SQL Server,
e.g. use a simple file sink and then have some process bulk-copy the files.
From: Sandys-Lumsdaine, James
Hi, we are running Flink 1.13.2 on Kinesis Analytics. Our source is
a Kafka topic with one partition so far, and we are using the
FlinkKafkaConsumer (kafka-connector-1.13.2).
Sometimes we get errors from the consumer like the ones below:
Hi Guowei,
Will check the doc out. Thanks for your help.
Best regards,
Chen-Che
On Mon, Mar 21, 2022 at 4:05 PM Guowei Ma wrote:
> Hi, Huang
> From the document[1] it seems that you need to close the iteration stream,
> e.g. `iteration.closeWith(feedback);`
> BTW You also could get a
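For reference, the `closeWith` usage mentioned in the quoted reply can be sketched as follows. This is a minimal, hypothetical example; the loop body, feedback condition, and data values are assumptions, not the original job.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.IterativeStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IterateExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Long> input = env.fromSequence(0, 10);

        // Start an iteration over the stream.
        IterativeStream<Long> iteration = input.iterate();

        // The loop body: decrement each element.
        DataStream<Long> body = iteration.map(v -> v - 1);

        // Elements still greater than zero are fed back into the loop...
        DataStream<Long> feedback = body.filter(v -> v > 0);
        // ...and the iteration must be closed with the feedback stream.
        iteration.closeWith(feedback);

        // Elements that reached zero leave the loop.
        body.filter(v -> v <= 0).print();

        env.execute("iterate example");
    }
}
```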
Hi Georg,
I just noticed that the replies between Jeff and me didn't go through the
mailing list. For reference, Jeff moved it to
https://github.com/zjffdu/flink-scala-shell
Best regards,
Martijn
On Tue, 22 Mar 2022 at 18:24, Georg Heiler
wrote:
> Many thanks.
>
> In the linked discussion it
Version: 1.13
mode:batch & FlinkSQL
What is causing this?
Error:
```
java.lang.RuntimeException: Too more data in this partition: 2147483648
at
org.apache.flink.table.runtime.hashtable.BinaryHashPartition.insertIntoBuildBuffer(BinaryHashPartition.java:264)
at
Hi Ian,
Unfortunately, configuring the naming is only possible when using the
FileSystem connector from the DataStream API. If this would be an option for you,
the configuration is explained here:
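With the DataStream `FileSink`, part-file naming is configured through `OutputFileConfig`. A minimal sketch (the output path, prefix, and suffix below are placeholders):

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.OutputFileConfig;

public class NamedFileSink {
    public static FileSink<String> build() {
        // Part files will be named <prefix>-<subtask>-<counter><suffix>.
        OutputFileConfig fileConfig = OutputFileConfig.builder()
                .withPartPrefix("my-prefix")
                .withPartSuffix(".ext")
                .build();

        return FileSink.<String>forRowFormat(
                        new Path("/tmp/output"),
                        new SimpleStringEncoder<String>("UTF-8"))
                .withOutputFileConfig(fileConfig)
                .build();
    }
}
```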
After fixing your negative timestamp bug, can the timer be triggered?
> On 23 Mar 2022, at 2:39 AM, Binil Benjamin wrote:
>
> Here are some more findings as I was debugging this. I peeked into the
> snapshot to see the current values in "_timer_state/processing_user-timers"
> and here is
When using a Postgres database as a Catalog, how do I set additional
parameters, such as sink.buffer-flush.interval and sink.buffer-flush.max-rows?
17610801...@163.com
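One common workaround for the question above is to override per-table options with a dynamic-options SQL hint once `table.dynamic-table-options.enabled` is set. This is a sketch under assumptions: the catalog, table, and option values below are placeholders.

```sql
-- Enable dynamic table options (disabled by default):
SET table.dynamic-table-options.enabled=true;

-- Override sink options on a catalog-derived table via an OPTIONS hint:
INSERT INTO my_pg_catalog.my_db.my_table /*+ OPTIONS(
    'sink.buffer-flush.interval' = '2s',
    'sink.buffer-flush.max-rows' = '500'
) */
SELECT id, name FROM source_table;
```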