Thanks for your explanation.
It seems that what you actually need is a general connector parameter in the
'WITH' block, such as 'filter-pushdown-enabled', so that a specific connector
can reject all filters that would otherwise be pushed down to it.
Please forgive me if I'm wrong.
--
Best!
+1 (non-binding)
- verified signatures
- verified hash
- built from source code succeeded
Best,
Jiabao
Thanks Xuyang,
The 'table.optimizer.source.predicate-pushdown-enabled' option does not
provide fine-grained configuration for each source.
Suppose we have an SQL query with two sources: Kafka and a database (CDC).
The database is sensitive to pressure, and we want to configure it to not
perform f
Hi, the existing configuration
'table.optimizer.source.predicate-pushdown-enabled' seems to do what you want.
Can you describe more clearly the difference between what you want and this
configuration?
--
Best!
Xuyang
At 2023-10-24 14:12:14, "Jiabao Sun" wrote:
>Hi Devs,
>
Hi Devs,
I would like to start a discussion on supporting a configuration to disable
filter pushdown for Table/SQL sources [1].
Currently, Flink SQL does not allow users to enable or disable filter
pushdown.
However, filter pushdown has some side effects, such as additional
comput
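For illustration, a per-connector switch of the kind being discussed might look like the sketch below. Note that the option name 'filter.push-down.enabled' is hypothetical and not an existing Flink option; the table schema and connector choice are placeholders too.

```sql
-- Sketch only: 'filter.push-down.enabled' is a hypothetical option name,
-- not an existing Flink connector option.
CREATE TABLE orders_cdc (
  order_id BIGINT,
  amount   DECIMAL(10, 2)
) WITH (
  'connector' = 'mysql-cdc',            -- a pressure-sensitive source
  'filter.push-down.enabled' = 'false'  -- hypothetical: reject pushed-down filters
);
```

With such an option, the planner-wide 'table.optimizer.source.predicate-pushdown-enabled' flag could stay on globally while individual sensitive sources opt out.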
+1(non-binding)
- Downloaded artifacts from dist[1]
- Verified SHA512 checksums
- Verified GPG signatures
- Built the source with Java 8 and 11
[1] https://dist.apache.org/repos/dist/dev/flink/flink-1.18.0-rc3/
Bests,
Samrat
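The checksum step from checklists like the one above can be scripted; below is a minimal shell sketch wrapping `sha512sum`'s check mode (the GPG signature step would additionally use `gpg --verify`, which needs the release manager's public key and is omitted here). The function name is mine, not from the original message.

```shell
# Minimal sketch of the SHA-512 checksum step from a release-vote checklist.
# Real release artifacts ship with a companion ".sha512" file next to them.
verify_sha512() {
  # $1: path to the .sha512 companion file; the artifact it names must
  # sit in the current directory, as sha512sum -c resolves relative names
  sha512sum -c "$1" > /dev/null 2>&1
}
```

Returns exit status 0 when the artifact matches its published digest, non-zero otherwise.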
On Tue, Oct 24, 2023 at 10:44 AM Jingsong Li wrote:
> +1 (binding)
>
+1 (binding)
- verified signatures & hash
- built from source code succeeded
- started SQL Client, used Paimon connector to write and read, the
result is expected
Best,
Jingsong
On Tue, Oct 24, 2023 at 12:15 PM Yuxin Tan wrote:
>
> +1(non-binding)
>
> - Verified checksum
> - Build from source c
+1(non-binding)
- Verified checksum
- Built from source code
- Verified signature
- Started a local cluster and ran Streaming & Batch wordcount jobs; the
result is as expected
- Verified web PR
Best,
Yuxin
Qingsheng Ren wrote on Tue, Oct 24, 2023 at 11:19:
> +1 (binding)
>
> - Verified checksums and signature
Jiabao Sun created FLINK-33344:
--
Summary: Replace Time with Duration in RpcInputSplitProvider
Key: FLINK-33344
URL: https://issues.apache.org/jira/browse/FLINK-33344
Project: Flink
Issue Type: S
+1 (binding)
- Verified checksums and signatures
- Built from source with Java 8
- Started a standalone cluster and submitted a Flink SQL job that read and
wrote with Kafka connector and CSV / JSON format
- Reviewed web PR and release note
Best,
Qingsheng
On Mon, Oct 23, 2023 at 10:40 PM Leonard
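A smoke-test job of the kind Qingsheng describes could be sketched in Flink SQL as follows; the topic names, broker address, and schema are placeholders, not details from the original message.

```sql
-- Sketch of a Kafka-in / Kafka-out smoke test: read CSV, write JSON.
-- Topic and broker names below are placeholders.
CREATE TABLE src (
  id  BIGINT,
  msg STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'in-topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'csv'
);

CREATE TABLE dst (
  id  BIGINT,
  msg STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'out-topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);

INSERT INTO dst SELECT id, msg FROM src;
```

Running such a statement set in the SQL Client exercises the connector, both formats, and the standalone cluster in one pass.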
+1 to reopen the FLIP; it has been stalled for more than a year due to the
author's limited availability.
Glad to see that developers from IBM would like to take over the FLIP; we can
continue the discussion in the FLIP-233 discussion thread [1]
Best,
Leonard
[1] https://lists.apache.org/thread/cd60ln4
Sorry for the delay. I filed
https://issues.apache.org/jira/projects/FLINK/issues/FLINK-33343 to track
and address the proposal here.
Regards
Venkata krishnan
On Tue, Oct 17, 2023 at 7:49 PM Venkatakrishnan Sowrirajan
wrote:
> Thanks Martijn, David, Ryan and others for contributing to this gre
Venkata krishnan Sowrirajan created FLINK-33343:
---
Summary: Close stale Flink PRs
Key: FLINK-33343
URL: https://issues.apache.org/jira/browse/FLINK-33343
Project: Flink
Issue Typ
Hi David,
Just to follow-up on that last question: I can confirm that there are no
regressions for the Flink Kafka connector working with Flink 1.18. The
previous nightly build failures were caused by breaking changes in test
code, which have since been resolved.
I'll be creating new releases for
Hi,
I notice
https://cwiki.apache.org/confluence/display/FLINK/FLIP-233%3A+Introduce+HTTP+Connector
has been abandoned due to lack of capacity. I work for IBM and my team is
interested in helping to get this connector contributed into Flink. Can we open
this FLIP again and we can look to get
+1 (non-binding)
- Verified checksum and signatures,
- checked Helm repo
- Installed operator,
- tested word count and state machine example
Bests,
Samrat
On Mon, 23 Oct 2023 at 9:35 PM, Mate Czagany wrote:
> +1 (non-binding)
>
> - Verified checksums, signatures, no binary found in source
>
Hi Martijn,
Thanks for the pointer; that makes sense – many (most?) projects only provide
fixes for the current release (apart from exceptional circumstances – possibly
some high-priority security fixes); I am curious why Flink maintains fixes for
two streams of code.
One thing that I wondered about is the u
+1 (non-binding)
- Verified checksums, signatures, no binary found in source
- Verified Helm chart and Docker images
- Tested autoscaler on 1.18 with reactive scaling
Regards,
Mate
Gyula Fóra wrote (on Mon, Oct 23, 2023, 9:45):
> +1 (binding)
>
> - Verified checksums, signatures, sour
Hi David,
The change that conflicts with your PR was caused by FLINK-33291
[1]. I was thinking about adding links to the comments to make the
navigation to the corresponding resources easier as you rightfully
mentioned. I didn't do it in the end because I was afraid that
documentation might
Hi David,
The policy is that the current and previous minor release are
supported, and it's documented at
https://flink.apache.org/downloads/#update-policy-for-old-releases
One of the reasons for decoupling the connectors from Flink is that it
could be possible to support older versions of Fli
+1 (binding)
- verified signatures
- verified hashes
- built from source code succeeded
- checked all dependency artifacts are 1.18
- started SQL Client, used MySQL CDC connector to read the changelog from a
database; the result is as expected
- reviewed the web PR, left minor comments
- reviewed the
Hi,
I am relatively new to the Flink community. I notice that critical fixes are
backported to previous versions. Do we have a documented backport strategy and
set of principles?
The reason I ask is that we recently removed the Kafka connector from the
core repository, so the Kafka connec
Hi Marton and Martijn,
I have removed the link to the legacy Paimon (flink-table-store) doc, leaving
only a link to the incubating Paimon doc. Please move to the PR review [1] for quick
discussions.
[1] https://github.com/apache/flink-web/pull/665
Best
Yun Tang
From
Matthias Pohl created FLINK-33342:
-
Summary: JDK 17 CI run doesn't set java17-target profile
Key: FLINK-33342
URL: https://issues.apache.org/jira/browse/FLINK-33342
Project: Flink
Issue Type:
Stefan Richter created FLINK-33341:
--
Summary: Use available local state for rescaling
Key: FLINK-33341
URL: https://issues.apache.org/jira/browse/FLINK-33341
Project: Flink
Issue Type: Impro
(under "Prepare for the release")
As for CI:
https://github.com/apache/flink/blob/78b5ddb11dfd2a3a00b58079fe9ee29a80555988/tools/ci/maven-utils.sh#L84
https://github.com/apache/flink/blob/9b63099964b36ad9d78649bb6f5b39473e0031bd/tools/azure-pipelines/build-apache-repo.yml#L39
https://github.com/ap
I am a bit confused by the split in the CompletedJobStore / JobDetailsStore.
Seems like JobDetailsStore is simply a view on top of CompletedJobStore:
- Maybe we should not call it a store? Is it storing anything?
- Why couldn't the cleanup triggering be the responsibility of the
CompletedJobStore
Hi David,
Please check [1] in the section Verify Java and Maven Version. Thanks!
Best regards,
Jing
[1]
https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release
On Mon, Oct 23, 2023 at 1:25 PM David Radley
wrote:
> Hi,
>
> I have an open pr in the backlog that improves the
Hi,
I have an open PR in the backlog that improves the pom.xml by introducing some
Maven variables. The PR is https://github.com/apache/flink/pull/23469
It has been reviewed but not merged. In the meantime another pom change has
been added that caused a conflict. I have amended the code in my PR
Sergey Nuyanzin created FLINK-33340:
---
Summary: Bump Jackson to 2.15.3
Key: FLINK-33340
URL: https://issues.apache.org/jira/browse/FLINK-33340
Project: Flink
Issue Type: Technical Debt
FLINK-17375 [1] removed [2] run-pre-commit-tests.sh in Flink 1.12. Since
then the following tests are not executed anymore:
test_state_migration.sh
test_state_evolution.sh
test_streaming_kinesis.sh
test_streaming_classloader.sh
test_streaming_distributed_cache_via_blob.sh
Certain classes that were
Hi ,
I want to apply for FLIP Wiki Edit Permission, I am working on [FLINK-33267]
https://issues.apache.org/jira/browse/FLINK-33267 , and I would like to create
a FLIP for it.
My Jira ID is Dan Zou (zou...@apache.org).
Best,
Dan Zou
Martijn Visser created FLINK-9:
--
Summary: Update Guava to 32.1.3
Key: FLINK-9
URL: https://issues.apache.org/jira/browse/FLINK-9
Project: Flink
Issue Type: Technical Debt
Hi all,
Thanks for your responses.
@Jingsong Li: Thanks for the reference to the web PR, I missed that.
@Yun Tang: Thanks, I prefer simply removing the TableStore link from the
documentation navigation of Flink, as it is not a subproject of Flink
anymore - it is now its own project. It has had 2
Piotr Nowojski created FLINK-8:
--
Summary: Bump up RocksDB version to 7.x
Key: FLINK-8
URL: https://issues.apache.org/jira/browse/FLINK-8
Project: Flink
Issue Type: Sub-task
Piotr Nowojski created FLINK-7:
--
Summary: Expose IngestDB and ClipDB in the official RocksDB API
Key: FLINK-7
URL: https://issues.apache.org/jira/browse/FLINK-7
Project: Flink
Is
Thanks for driving that.
+1 (non-binding)
Regards,
Xiangyu
Yu Chen wrote on Mon, Oct 23, 2023 at 15:19:
> +1 (non-binding)
>
> We deeply need this capability to balance tasks at the TaskManager level in
> production, which helps to make more efficient use of TaskManager
> resources. Thanks for driving
Martijn Visser created FLINK-6:
--
Summary: Upgrade ASM to 9.6
Key: FLINK-6
URL: https://issues.apache.org/jira/browse/FLINK-6
Project: Flink
Issue Type: Technical Debt
C
+1 (binding)
- Verified checksums, signatures, source release content
- Helm repo works correctly and points to the correct image / version
- Installed operator, ran stateful example
Gyula
On Sat, Oct 21, 2023 at 1:43 PM Rui Fan <1996fan...@gmail.com> wrote:
> +1(non-binding)
>
> - Downloaded a
+1 (non-binding)
We deeply need this capability to balance tasks at the TaskManager level in
production, which helps to make more efficient use of TaskManager
resources. Thanks for driving that.
Best,
Yu Chen
Yangze Guo wrote on Mon, Oct 23, 2023 at 15:08:
> +1 (binding)
>
> Best,
> Yangze Guo
>
> On
+1 (binding)
Best,
Yangze Guo
On Mon, Oct 23, 2023 at 12:00 PM Rui Fan <1996fan...@gmail.com> wrote:
>
> +1(binding)
>
> Thanks to Yuepeng and to everyone who participated in the discussion!
>
> Best,
> Rui
>
> On Mon, Oct 23, 2023 at 11:55 AM Roc Marshal wrote:
>>
>> Hi all,
>>
>> Thanks for al