Hi!
I think we have to make sure that the Rescale API will also work without
the adaptive scheduler (for instance when we are running Flink with the
Kubernetes Native Integration, or in other cases where the adaptive
scheduler is not supported).
What do you think?
Cheers
Gyula
On Fri, Oct 7,
Thanks!
On Sun, Oct 9, 2022, 08:45 Qingsheng Ren wrote:
Hi Sriram,
A short answer: the polling interval is adjusted “dynamically” (by blocking the
KafkaConsumer#poll call) according to the data traffic.
I think this line [1] is what you are looking for.
Basically KafkaSource fires KafkaPartitionSplitReader.fetch calls repeatedly in
a loop,
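The blocking-poll behavior described above can be illustrated with a minimal toy sketch. This is not Flink's actual KafkaPartitionSplitReader code; it uses a plain BlockingQueue as a stand-in for KafkaConsumer#poll, just to show why the fetch loop needs no explicit sleep: when records are buffered, poll returns immediately, and when the source is idle, poll blocks for up to its timeout, so the effective polling interval stretches on its own.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy sketch (not Flink's real code): the loop never sleeps on its own;
// the blocking poll(timeout) call is what spaces out the iterations.
public class PollLoopSketch {

    // Stand-in for KafkaConsumer#poll: blocks up to timeoutMs for one record,
    // returning null if nothing arrives within the timeout.
    static String poll(BlockingQueue<String> records, long timeoutMs)
            throws InterruptedException {
        return records.poll(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> records = new ArrayBlockingQueue<>(10);
        records.add("r1");
        records.add("r2");

        // High traffic: buffered records make poll return immediately.
        System.out.println(poll(records, 100));
        System.out.println(poll(records, 100));

        // Idle source: poll blocks for the full timeout, then returns null,
        // which is what stretches the effective polling interval.
        long start = System.nanoTime();
        String r = poll(records, 100);
        long waitedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(r + " after ~" + waitedMs + " ms");
    }
}
```

In the real KafkaSource the same idea applies per split fetcher: the loop's pace is governed entirely by how long KafkaConsumer#poll blocks, not by any scheduler-side interval.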
zhangjingcun created FLINK-29552:
Summary: Repair the instructions for DAYOFYEAR, DAYOFMONTH, and
DAYOFWEEK functions
Key: FLINK-29552
URL: https://issues.apache.org/jira/browse/FLINK-29552
Project:
Hi Dingsheng,
Welcome to the Flink community! Actually you don’t need any kind of
permission to contribute to the Flink project. Please go ahead and
create / pick up any JIRA ticket you’d like to work on, then submit your PR on
GitHub.
Here’s a documentation about some
Hi devs and users,
I’d like to start a discussion about reverting a breaking change about sink
metrics made in 1.15 by FLINK-26126 [1] and FLINK-26492 [2].
TL;DR
All sink metrics with name “numXXXOut” defined in FLIP-33 are replaced by
“numXXXSend” in FLINK-26126 and FLINK-26492. Considering
dalongliu created FLINK-29551:
Summary: Improving adaptive hash join by using sort merge join
strategy per partition instead of all partitions
Key: FLINK-29551
URL: https://issues.apache.org/jira/browse/FLINK-29551
+1 (non-binding)
* Hashes and Signatures look good
* All required files on dist.apache.org
* Tag is present on GitHub
* Verified source archive does not contain any binary files
* Source archive builds using maven
* Deployed standalone session cluster and ran TopSpeedWindowing example in
roa created FLINK-29550:
Summary: example "basic-checkpoint-ha.yaml" not working
Key: FLINK-29550
URL: https://issues.apache.org/jira/browse/FLINK-29550
Project: Flink
Issue Type: Bug
Samrat Deb created FLINK-29549:
Summary: Flink sql to add support of using AWS glue as metastore
Key: FLINK-29549
URL: https://issues.apache.org/jira/browse/FLINK-29549
Project: Flink
Issue
RocMarshal created FLINK-29548:
Summary: Remove deprecated class files of the 'flink-test-utils'
module.
Key: FLINK-29548
URL: https://issues.apache.org/jira/browse/FLINK-29548
Project: Flink
dalongliu created FLINK-29547:
Summary: Select a[1] which is array type for parquet complex type
throw ClassCastException
Key: FLINK-29547
URL: https://issues.apache.org/jira/browse/FLINK-29547
Hui Wang created FLINK-29546:
Summary: UDF:Failed to compile split code, falling back to
original code
Key: FLINK-29546
URL: https://issues.apache.org/jira/browse/FLINK-29546
Project: Flink
xiaogang zhou created FLINK-29545:
Summary: kafka consuming stop when trigger first checkpoint
Key: FLINK-29545
URL: https://issues.apache.org/jira/browse/FLINK-29545
Project: Flink
Issue