Hi,
Reactive Mode and the Autoscaler in the Kubernetes operator are two different
approaches to elastic scaling of Flink.
Reactive Mode [1] has to be used together with Flink's passive resource
management approach (only Standalone mode takes this approach), where the TM
number is
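For reference, enabling Reactive Mode in a Standalone deployment is a one-line scheduler setting (a minimal sketch based on the documented config key; all other values are your own):

```yaml
# flink-conf.yaml -- Standalone deployments only
scheduler-mode: reactive
# In Reactive Mode the job ignores configured parallelism and always uses
# all available TaskManager slots, so scaling = adding/removing TMs.
```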
Hi,
I have a question related to this.
I am doing a POC with Kubernetes operator 1.8 and Flink 1.18 with
Reactive Mode enabled. I added some dummy slow and fast operators to the
Flink job, and I can see back pressure accumulating, but I am not
sure why my Flink task managers are
The Apache Flink community is very happy to announce the release of Apache
flink-connector-kafka 3.2.0 for Flink 1.18 and 1.19.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is
The Apache Flink community is very happy to announce the release of Apache
flink-connector-jdbc 3.2.0 for Flink 1.18 and 1.19.
The release is
Hi all,
We have just upgraded to Flink 1.19 and we are experiencing some issues when
the job tries to restore from *some* savepoints, not all. In these cases,
the failure manifests when the job is unable to create a checkpoint after
starting from the savepoint, saying *Failure reason: Not all required
Hi Yanfei,
I'm using the Flink SQL API; however, isn't SQL translated into the
DataStream building blocks, so a SQL window is in fact e.g. a
SlicingWindowOperator?
I see only these RocksDB directories under a TM's /tmp/flink-io-* directory:
bash-4.4$ du -b -d 1 .
94262
Apologies, this was RC2, not RC1.
On Fri, Jun 7, 2024 at 11:12 AM Danny Cranmer
wrote:
I'm happy to announce that we have unanimously approved this release.
There are 7 approving votes, 3 of which are binding:
* Ahmed Hamdy
* Hang Ruan
* Leonard Xu (binding)
* Yuepeng Pan
* Zhongqiang Gong
* Rui Fan (binding)
* Weijie Guo (binding)
There was one -1 vote that was cancelled.
*
The Apache Flink community is very happy to announce the release of Apache
flink-connector-cassandra 3.2.0 for Flink 1.18 and 1.19.
The
Hi Everyone,
We are using the TableStream API of Flink v1.14.3 with Kubernetes HA
enabled, along with the Flink K8s Operator v1.6. One of the jobs, which had
been running stably for a long time, started restarting frequently. Upon
closer inspection, we observed that the container memory usage
The Apache Flink community is very happy to announce the release of Apache
flink-connector-aws 4.3.0 for Flink 1.18 and 1.19.
The release is
The Apache Flink community is very happy to announce the release of Apache
flink-connector-gcp-pubsub 3.1.0 for Flink 1.18 and 1.19.
The
Hi Alexandre,
According to this StackOverflow conversation [1], it seems both AsyncFunction
and FlatMapFunction require the output object to fit into memory, which seems
not feasible in the case you mentioned.
Maybe it could be done by creating a customized Flink Source with the new
FLIP-27 API?
Hi,
I am designing a Flink pipeline to process a stream of images (rasters, to be
more accurate), which are quite heavy: up to a dozen GB. To distribute the
processing of one image, we split it into tiles, to which we apply the
processing that doesn't require the whole image, before reassembling it.
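The split/process/reassemble idea can be sketched outside Flink in plain Java (the tile size and the per-tile operation are hypothetical stand-ins; in the real pipeline each tile would flow through an operator):

```java
import java.util.ArrayList;
import java.util.List;

public class TilingSketch {

    // One tile: its origin in the full raster plus its pixel block.
    record Tile(int row, int col, int[][] pixels) {}

    // Split a raster into tileSize x tileSize tiles (edge tiles may be smaller).
    static List<Tile> split(int[][] raster, int tileSize) {
        List<Tile> tiles = new ArrayList<>();
        for (int r = 0; r < raster.length; r += tileSize) {
            for (int c = 0; c < raster[0].length; c += tileSize) {
                int h = Math.min(tileSize, raster.length - r);
                int w = Math.min(tileSize, raster[0].length - c);
                int[][] block = new int[h][w];
                for (int i = 0; i < h; i++)
                    System.arraycopy(raster[r + i], c, block[i], 0, w);
                tiles.add(new Tile(r, c, block));
            }
        }
        return tiles;
    }

    // Stand-in for the per-tile processing that needs no neighboring pixels.
    static Tile process(Tile t) {
        int h = t.pixels().length, w = t.pixels()[0].length;
        int[][] out = new int[h][w];
        for (int i = 0; i < h; i++)
            for (int j = 0; j < w; j++)
                out[i][j] = t.pixels()[i][j] + 1; // hypothetical operation
        return new Tile(t.row(), t.col(), out);
    }

    // Reassemble processed tiles into the full raster using each tile's origin.
    static int[][] reassemble(List<Tile> tiles, int height, int width) {
        int[][] raster = new int[height][width];
        for (Tile t : tiles)
            for (int i = 0; i < t.pixels().length; i++)
                System.arraycopy(t.pixels()[i], 0,
                        raster[t.row() + i], t.col(), t.pixels()[i].length);
        return raster;
    }
}
```

Because each tile carries its own origin, tiles can be processed on any parallel subtask and reassembled in any order.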
Hi Salva,
Just wondering, why is it not good to set the uid like this?
```
output.sinkTo(outputSink).uid("my-human-readable-sink-uid");
```
From the mentioned UID, Flink is going to compute a hash, which is consistent
from the UID -> HASH transformation perspective.
BR,
G
On Fri, Jun 7, 2024 at 7:54 AM
Hi!
To simplify things you can generally look at TRUE_PROCESSING_RATE,
SCALE_UP_RATE_THRESHOLD and SCALE_DOWN_RATE_THRESHOLD.
If TPR is below the scale-up threshold then we should scale up, and if it's
above the scale-down threshold then we scale down.
In your case, what we see for your source
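The decision rule above can be sketched in plain Java (the names mirror the autoscaler metrics mentioned, but this method is illustrative, not the operator's actual code):

```java
public class ScalingDecision {
    enum Action { SCALE_UP, SCALE_DOWN, NO_CHANGE }

    // TPR below the scale-up threshold  -> vertex can't keep up: scale up.
    // TPR above the scale-down threshold -> vertex is over-provisioned: scale down.
    static Action decide(double trueProcessingRate,
                         double scaleUpRateThreshold,
                         double scaleDownRateThreshold) {
        if (trueProcessingRate < scaleUpRateThreshold) return Action.SCALE_UP;
        if (trueProcessingRate > scaleDownRateThreshold) return Action.SCALE_DOWN;
        return Action.NO_CHANGE;
    }
}
```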