Weijie Guo created FLINK-32459:
--
Summary: Force set the parallelism of SocketTableSource to 1
Key: FLINK-32459
URL: https://issues.apache.org/jira/browse/FLINK-32459
Project: Flink
Issue Type:
Hi Harish
Jiabao helped troubleshoot the issue [1] and fixed it very efficiently, in
less than 24 hours. Thanks, Jiabao!
You can build the MongoDB connector from the latest main branch, or you can
wait for the next connector release.
Best,
Leonard
[1]https://issues.apache.org/jira/browse/FLINK-32446
lincoln lee created FLINK-32458:
---
Summary: support mixed use of JSON_OBJECTAGG & JSON_ARRAYAGG with
other aggregate functions
Key: FLINK-32458
URL: https://issues.apache.org/jira/browse/FLINK-32458
lincoln lee created FLINK-32457:
---
Summary: update current documentation of
JSON_OBJECTAGG/JSON_ARRAYAGG to clarify the limitation
Key: FLINK-32457
URL: https://issues.apache.org/jira/browse/FLINK-32457
lincoln lee created FLINK-32456:
---
Summary: JSON_OBJECTAGG & JSON_ARRAYAGG cannot be used with other
aggregate functions
Key: FLINK-32456
URL: https://issues.apache.org/jira/browse/FLINK-32456
Project:
Hi Feng,
Thanks for your input.
>1. we can add a lineage interface like `supportReportLineage`
That's a very good idea, thanks very much. It can help users report
lineage for existing connectors in DataStream jobs without any additional
work. I will add this interface to the FLIP later.
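To make the idea concrete, here is a rough sketch of what such an interface
could look like. All names below (SupportsReportLineage, LineageVertex) are
illustrative assumptions on my side, not the final API; the actual definition
will be specified in the FLIP.

// Hypothetical sketch only; the concrete interface will be defined in the FLIP.
// A connector implementing this could have its lineage picked up automatically
// when the DataStream DAG is built, with no extra user code.
public interface SupportsReportLineage {
    LineageVertex reportLineage();
}

// Minimal illustrative vertex type (also not the FLIP's actual definition).
final class LineageVertex {
    private final String name;       // e.g. a topic or table name
    private final String namespace;  // e.g. a cluster address

    LineageVertex(String name, String namespace) {
        this.name = name;
        this.namespace = namespace;
    }

    public String getName() { return name; }
    public String getNamespace() { return namespace; }
}

With something like this in place, built-in connectors such as the Kafka
source could implement the interface directly, and DataStream users would get
lineage reporting without extra wiring.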
Tzu-Li (Gordon) Tai created FLINK-32455:
---
Summary: Breaking change in TypeSerializerUpgradeTestBase prevents
flink-connector-kafka from building against 1.18-SNAPSHOT
Key: FLINK-32455
URL:
Bo Cui created FLINK-32454:
--
Summary: deserializeStreamStateHandle of checkpoint read byte
Key: FLINK-32454
URL: https://issues.apache.org/jira/browse/FLINK-32454
Project: Flink
Issue Type: Bug
Tzu-Li (Gordon) Tai created FLINK-32453:
---
Summary: flink-connector-kafka does not build against Flink
1.18-SNAPSHOT
Key: FLINK-32453
URL: https://issues.apache.org/jira/browse/FLINK-32453
Hi there,
My company is in the process of rebuilding some of our batch Spark-based
ETL pipelines in Flink. We use protobuf to define our schemas. One major
challenge is that Flink protobuf deserialization has some semantic
differences with the ScalaPB encoders we use in our Spark systems. This
Hi all,
Thanks for the lively and good discussion. Given the length of the
discussion, I skimmed through and then did a deep dive on the latest state
of the FLIP. I think the FLIP is overall in a good state and ready to bring
to a vote.
One thing that I did notice while skimming through the
Hi all,
I would like to inform you that we have removed the Kafka connector code
from the Flink main repo. This should reduce developer confusion about
which repo to submit PRs to.
Regarding a few nuances, we have kept the Confluent Avro format in the main
repo. This is because the format is
Mason Chen created FLINK-32452:
--
Summary: Refactor SQL Client E2E Test to Remove Kafka SQL
Connector Dependency
Key: FLINK-32452
URL: https://issues.apache.org/jira/browse/FLINK-32452
Project: Flink
Mason Chen created FLINK-32451:
--
Summary: Refactor Confluent Schema Registry E2E Tests to remove
Kafka connector dependency
Key: FLINK-32451
URL: https://issues.apache.org/jira/browse/FLINK-32451
Martijn Visser created FLINK-32450:
--
Summary: Update Kafka CI setup to latest version for PRs and
nightly builds
Key: FLINK-32450
URL: https://issues.apache.org/jira/browse/FLINK-32450
Project:
Mason Chen created FLINK-32449:
--
Summary: Refactor state machine examples to remove Kafka dependency
Key: FLINK-32449
URL: https://issues.apache.org/jira/browse/FLINK-32449
Project: Flink
Issue
Martijn Visser created FLINK-32448:
--
Summary: Connector Shared Utils checks out wrong branch when
running CI for PRs
Key: FLINK-32448
URL: https://issues.apache.org/jira/browse/FLINK-32448
Project:
Hi Shammon
Thank you for proposing this FLIP. I think the Flink Job lineage is a very
useful feature.
I have a few questions:
1. For DataStream Jobs, users need to set up lineage relationships when
building DAGs for their custom sources and sinks.
However, for some common connectors such as Kafka
Hi Juho,
Thank you for bringing this up! Definitely +1 to this. We have had similar
requests for the AsyncSink as well.
As a side note, it would be useful to share the same implementation for both
somehow, to prevent duplicate code.
Happy to help with the implementation here.
For the
lincoln lee created FLINK-32447:
---
Summary: table hints are lost when they are inside a view referenced by an
external query
Key: FLINK-32447
URL: https://issues.apache.org/jira/browse/FLINK-32447
Project:
Jiabao Sun created FLINK-32446:
--
Summary: MongoWriter should regularly check whether the last write
time is greater than the specified time.
Key: FLINK-32446
URL: https://issues.apache.org/jira/browse/FLINK-32446
+1 (non-binding)
- verified signatures
- compiled from sources
- ran tests locally
- checked release notes
Best,
Yuepeng Pan
At 2023-06-27 07:42:00, "Sergey Nuyanzin" wrote:
>+1 (non-binding)
>
>- verified hashes
>- verified signatures
>- built from sources
>- checked release notes
>- review
Hi Ferenc,
If I understand correctly, there will be two types of jobs in sql-gateway:
`SELECT` and `NON-SELECT`, such as `INSERT`.
1. `SELECT` jobs need to collect results from the Flink cluster in a
corresponding session of the SQL gateway, and when the session is closed, the
job should be canceled.
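As a concrete illustration of that session-scoped lifecycle, below is a
minimal client sketch against the SQL Gateway v1 REST API. The endpoint paths
and JSON field names are my assumptions from the v1 API and should be verified
against the gateway version in use; the address and port are made up.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GatewaySessionSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String gateway = "http://localhost:8083"; // assumed gateway address

        // Open a session; the response is assumed to contain a sessionHandle.
        HttpResponse<String> open = client.send(
                HttpRequest.newBuilder(URI.create(gateway + "/v1/sessions"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString("{}"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // Crude handle extraction to keep the sketch dependency-free.
        String sessionHandle = open.body()
                .replaceAll(".*\"sessionHandle\"\\s*:\\s*\"([^\"]+)\".*", "$1");

        // A SELECT submitted under this session ties result collection to the
        // session; closing the session should then cancel the job.
        HttpResponse<String> exec = client.send(
                HttpRequest.newBuilder(URI.create(
                                gateway + "/v1/sessions/" + sessionHandle + "/statements"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(
                                "{\"statement\": \"SELECT 1\"}"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println("executeStatement -> " + exec.body());
    }
}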
Hi Alex & Gyula,
By compatibility discussion do you mean the "[DISCUSS] FLIP-321: Introduce
> an API deprecation process" thread [1]?
>
Yes, I meant the FLIP-321 discussion. I just noticed I pasted the wrong URL
in my previous email. Sorry for the mistake.
I am also curious to know if the
Hi Harish,
Thanks for reporting this issue. There are currently 5 ways to write:
1. Flush only on checkpoint
'sink.buffer-flush.interval' = '-1' and 'sink.buffer-flush.max-rows' = '-1'
2. Flush for every single element
'sink.buffer-flush.interval' = '0' or 'sink.buffer-flush.max-rows' = '1'
3.
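For illustration, a minimal Table API sketch of mode 1 above (flush only on
checkpoint) might look like the following; the table name and connection
details are made up:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class MongoFlushOnCheckpoint {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Disable both time- and size-based flushing, so the writer only
        // flushes on checkpoint (mode 1 in the list above).
        tEnv.executeSql(
                "CREATE TABLE mongo_sink (" +
                "  id STRING," +
                "  cnt BIGINT," +
                "  PRIMARY KEY (id) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'mongodb'," +
                "  'uri' = 'mongodb://localhost:27017'," +
                "  'database' = 'test_db'," +
                "  'collection' = 'test_coll'," +
                "  'sink.buffer-flush.interval' = '-1'," +
                "  'sink.buffer-flush.max-rows' = '-1'" +
                ")");
    }
}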
Hi Jark,
In the current implementation, any job submitted via the SQL Gateway has to be
done through a session, because all the operations are grouped under sessions.
Starting from there, if I close a session, that will close the
"SessionContext", which closes the "OperationManager" [1], and the
Hi,
I am using Flink version 1.17.1 and flink-mongodb-sql-connector version
1.0.1-1.17.
Below is the data pipeline flow.
Source 1 (Kafka topic using Kafka connector) -> Window Aggregation (legacy
grouped window aggregation) -> Sink (Kafka topic using upsert-kafka connector)
->
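For readers who want to reproduce this, a minimal sketch of the portion of
the pipeline described above is below. Topic, field, and server names are
made up; the TUMBLE(...) in the GROUP BY clause is the legacy grouped window
aggregation mentioned above.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class WindowAggPipelineSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Source 1: Kafka topic read with the Kafka connector.
        tEnv.executeSql(
                "CREATE TABLE orders (" +
                "  user_id STRING," +
                "  amount DOUBLE," +
                "  ts TIMESTAMP(3)," +
                "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'orders'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'scan.startup.mode' = 'earliest-offset'," +
                "  'format' = 'json'" +
                ")");

        // Sink: Kafka topic written with the upsert-kafka connector.
        tEnv.executeSql(
                "CREATE TABLE agg_results (" +
                "  user_id STRING," +
                "  window_end TIMESTAMP(3)," +
                "  total DOUBLE," +
                "  PRIMARY KEY (user_id, window_end) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'upsert-kafka'," +
                "  'topic' = 'agg_results'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'key.format' = 'json'," +
                "  'value.format' = 'json'" +
                ")");

        // Legacy grouped window aggregation.
        tEnv.executeSql(
                "INSERT INTO agg_results " +
                "SELECT user_id, TUMBLE_END(ts, INTERVAL '1' MINUTE), SUM(amount) " +
                "FROM orders " +
                "GROUP BY user_id, TUMBLE(ts, INTERVAL '1' MINUTE)");
    }
}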
Hey!
I share the same concerns mentioned above regarding the "ProcessFunction
API".
I don't think we should create a replacement for the DataStream API unless
we have a very good reason to do so, and only after a proper discussion, as
Alex said.
Cheers,
Gyula
On Tue, Jun 27, 2023 at 11:03
Hi Xintong,
By compatibility discussion do you mean the "[DISCUSS] FLIP-321: Introduce
an API deprecation process" thread [1]?
I am also curious to know if the rationale behind this new API has been
previously discussed on the mailing list. Do we have a list of shortcomings
in the current
Thanks Jark, Jingsong, and Zhu for the review!
Thanks Zhu for the suggestion. I have updated the configuration name as
suggested.
On Tue, Jun 27, 2023 at 4:45 PM Zhu Zhu wrote:
> Thanks Dong and Yunfeng for creating this FLIP and driving this discussion.
>
> The new design looks generally good
Hi, Jing.
Thanks for pointing it out. Yes, it's a typo; it should be "option". I have
now updated the FLIP.
Best regards,
Yuxia
----- Original Message -----
From: "Jing Ge"
To: "dev"
Cc: "zhangmang1"
Sent: Tuesday, June 27, 2023 4:26:20 PM
Subject: Re: [DISCUSS] FLIP-303: Support REPLACE TABLE AS SELECT statement
Thanks Dong and Yunfeng for creating this FLIP and driving this discussion.
The new design looks generally good to me. Increasing the checkpoint
interval when the job is processing backlogs is easier for users to
understand and can help in more scenarios.
I have one comment about the new
Hi Yuxia,
Thanks for the proposal. Many engines, like Snowflake and Databricks,
support it. +1
"3:Check the atomicity is enabled, it requires both the options
table.rtas-ctas.atomicity-enabled is set to true and the corresponding
table sink implementation SupportsStaging."
Typo? "Option" instead of
Matthias Pohl created FLINK-32445:
-
Summary: BlobStore.closeAndCleanupAllData doesn't do any close
action
Key: FLINK-32445
URL: https://issues.apache.org/jira/browse/FLINK-32445
Project: Flink
Looks good to me!
Thanks Dong, Yunfeng and all for your discussion and design.
Best,
Jingsong
On Tue, Jun 27, 2023 at 3:35 PM Jark Wu wrote:
>
> Thank you Dong for driving this FLIP.
>
> The new design looks good to me!
>
> Best,
> Jark
>
> > On Jun 27, 2023, at 14:38, Dong Lin wrote:
> >
> > Thank you
Thank you Dong for driving this FLIP.
The new design looks good to me!
Best,
Jark
> On Jun 27, 2023, at 14:38, Dong Lin wrote:
>
> Thank you Leonard for the review!
>
> Hi Piotr, do you have any comments on the latest proposal?
>
> I am wondering if it is OK to start the voting thread this week.
>
>
+1 (binding)
- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven
- Verified licenses
- Verified web PRs
On Tue, Jun 27, 2023 at 1:42 AM Sergey Nuyanzin wrote:
> +1 (non-binding)
>
> - verified hashes
> - verified
Thank you Leonard for the review!
Hi Piotr, do you have any comments on the latest proposal?
I am wondering if it is OK to start the voting thread this week.
On Mon, Jun 26, 2023 at 4:10 PM Leonard Xu wrote:
> Thanks Dong for driving this FLIP forward!
>
> Introducing `backlog status`
Hi, all.
Thanks for the feedback.
If there are no other questions or concerns about the FLIP [1], I'd like to
start the vote tomorrow (June 28).
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-303%3A+Support+REPLACE+TABLE+AS+SELECT+statement
Best regards,
Yuxia
From: "zhangmang1"
Jark Wu created FLINK-32444:
---
Summary: Enable object reuse for Flink SQL jobs by default
Key: FLINK-32444
URL: https://issues.apache.org/jira/browse/FLINK-32444
Project: Flink
Issue Type: New