Hi Paul,
I think this FLIP is already in good shape. I just left some additional
thoughts:
*1) the display of savepoint_path*
Could the displayed savepoint_path include the scheme part?
E.g. `hdfs:///flink-savepoints/savepoint-cca7bc-bb1e257f0dab`
IIUC, the scheme part is omitted when it's a
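As a minimal sketch of the interaction being discussed (assuming the `STOP JOB ... WITH SAVEPOINT` syntax from the FLIP; the job id and returned path below are illustrative placeholders only):

```sql
-- Stop a job and trigger a savepoint from the SQL client
-- (the job id is a hypothetical placeholder)
STOP JOB '228d70913eab60dda85c5e7f78b5782c' WITH SAVEPOINT;

-- Suggested display: include the scheme so the target filesystem
-- is unambiguous, e.g.
-- savepoint_path: hdfs:///flink-savepoints/savepoint-cca7bc-bb1e257f0dab
```

Without the scheme, a path like `/flink-savepoints/...` is resolved against the cluster's default filesystem, which can be ambiguous for users running multiple filesystems.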
Hi Xuyang,
Thanks for starting this discussion. Join Hint is a long-time requested
feature.
I have briefly gone through the design doc. Join Hint is a public API for
SQL syntax.
It should work for both streaming and batch SQL. I understand some special
hints
may only work for batch SQL. Could you
Hi Martijn,
Regarding maintaining Gateway inside or outside Flink code base,
I would like to share my thoughts:
> I would like to understand why it's complicated to make the upgrades
problematic. Is it because of relying on internal interfaces? If so, should
we not consider making them public?
It's great to see the active discussion! I want to share my ideas:
1) implement the cache in framework vs. connectors base
I don't have a strong opinion on this. Both ways should work (e.g., cache
pruning, compatibility).
The framework way can provide more concise interfaces.
The connector base
Hi Martijn,
Thanks for bringing up this discussion.
From my point of view, the Flink Bylaws should also apply to the
connectors.
I don't think connectors are just implementations; they provide many APIs
for end-users, including the DataStream API, SQL DDL options, and SQL
metadata columns.
There
I'm also fine with the end of July for the feature freeze.
Best,
Jark
On Thu, 28 Apr 2022 at 21:00, Martijn Visser wrote:
> +1 for continuing to strive for a 5 months release cycle.
>
> And +1 to have the planned feature freeze mid August, which I would propose
> to have happen on Monday the
+1 (binding)
Thanks Yuxia for driving this work.
Best,
Jark
On Thu, 28 Apr 2022 at 11:58, Jingsong Li wrote:
> +1 (Binding)
>
> A very good step to move forward.
>
> Best.
> Jingsong
>
> On Wed, Apr 27, 2022 at 9:33 PM yuxia wrote:
> >
> > Hi, everyone
> >
> > Thanks all for attention to
Thanks Jingsong for starting this discussion.
I think it's reasonable to add them to the public APIs,
which can make building connectors easier.
Looking forward to a FLIP to finalize the APIs.
Best,
Jark
On Tue, 26 Apr 2022 at 14:03, Jingsong Li wrote:
> Hi everyone,
>
> The source sink for the
Thanks Shengkai for driving this effort,
I think this is an essential addition to Flink Batch.
I have some small suggestions:
1) Kyuubi provides three ways to configure Hive metastore [1]. Could we
provide similar abilities?
Especially with the JDBC Connection URL, users can access different Hive
ase deals only with SplitIds, whereas SplitReader needs the
> actual splits to pause them. I found the discrepancy acceptable for the
> sake of simplifying changes significantly, especially as they would highly
> likely impact performance as we would have to perform additional
Thanks, Godfrey, for starting this discussion,
I understand the motivation behind it.
No bugfix releases, slow feature reviewing, and no compatibility guaranteed
are genuinely blocking the development of Flink SQL.
I think a fork is the last choice before trying our best to cooperate with
the
Thanks for the effort, Dawid and Sebastian!
I just have some minor questions (maybe I missed something).
1. Will the framework always align with watermarks when the source
implements the interface?
I'm afraid not every case needs watermark alignment even if Kafka
implements the interface,
and
I started a local standalone cluster and ran several queries
using the SQL CLI, but found some issues [1][2][3]. They are all
related to result correctness or backward compatibility.
I didn't mark them as blocker issues. If we have a next RC, we
should get them merged; if not, we can release a
Jark Wu created FLINK-27369:
---
Summary: COALESCE('1', CAST(NULL as varchar)) throws expression
type mismatch
Key: FLINK-27369
URL: https://issues.apache.org/jira/browse/FLINK-27369
Project: Flink
Jark Wu created FLINK-27368:
---
Summary: CAST(' 1 ' as BIGINT) returns wrong result
Key: FLINK-27368
URL: https://issues.apache.org/jira/browse/FLINK-27368
Project: Flink
Issue Type: Bug
Jark Wu created FLINK-27367:
---
Summary: CAST between INT and DATE is broken
Key: FLINK-27367
URL: https://issues.apache.org/jira/browse/FLINK-27367
Project: Flink
Issue Type: Bug
Thanks for driving this work @Ron,
+1 (binding)
Best,
Jark
On Thu, 21 Apr 2022 at 10:42, Mang Zhang wrote:
> +1
>
> --
>
> Best regards,
> Mang Zhang
>
> At 2022-04-20 18:28:28, "刘大龙" wrote:
> >Hi, everyone
> >
> >
> >
> >
> >I'd like to start a vote on
Hi Martijn,
I have discussed this with Yuxia offline and improved the design again.
*Here are the improvements:*
1) all the public APIs are staying in flink-table-common or
flink-table-api-java modules,
2) rename "flink-table-planner-spi" to "flink-table-calcite-bridge" which
only contains
Thank Ron for updating the FLIP.
I think the updated FLIP has addressed Martijn's concern.
I don't have other feedback. So +1 for a vote.
Best,
Jark
On Fri, 15 Apr 2022 at 16:36, 刘大龙 wrote:
> Hi, Jingsong
>
> Thanks for your feedback. We will use Flink's FileSystem abstraction, so HDFS
> S3 OSS
Jark Wu created FLINK-27301:
---
Summary: KafkaSourceE2ECase#restartFromSavepoint is failed on Azure
Key: FLINK-27301
URL: https://issues.apache.org/jira/browse/FLINK-27301
Project: Flink
Issue Type
d unbounded jobs, how
> this
> >>> works with the SQL upgrade story. Could you create one?
> >> Sure. I’m preparing one. Please give me the permission if possible.
> >>
> >> My Confluence user name is `paulin3280`, and the full name is `Paul
> Lam`.
> >>
Hi Konstantin,
Thanks for starting this discussion.
From my perspective, I prefer the "Language Tabs" approach.
But maybe we can improve the tabs to move to the sidebar or top menu,
which allows users to first decide on their language and then the API.
IMO, programming languages are just like
Congrats Yuan! Well deserved!
Best,
Jark
On Tue, 15 Mar 2022 at 17:42, Qingsheng Ren wrote:
> Congratulations Yuan!
>
> Best regards,
>
> Qingsheng Ren
>
> > On Mar 15, 2022, at 15:09, Yuxin Tan wrote:
> >
> > Congratulations, Yuan!
> >
> > Best,
> > Yuxin
> >
> > Yang Wang 于2022年3月15日周二
ket for that.
>>
>> It would be great to get some insights from currently Flink and Hive
>> users which versions are being used.
>> @Jark I would indeed deprecate the old Hive versions in Flink 1.15 and
>> then drop them in Flink 1.16. That would also remove some tech d
I also have some concerns because it's a huge change and the 1.15 will be
released soon.
I remember the last time when we merged Java formatting, and all the
pending bugfix PRs
needed to be refactored. I'm afraid this may delay the 1.15 release. I
guess the scala formatting
is a nice-to-have
Jark Wu created FLINK-26541:
---
Summary: SQL Client should support submitting SQL jobs in
application mode
Key: FLINK-26541
URL: https://issues.apache.org/jira/browse/FLINK-26541
Project: Flink
Hi Martijn,
Thanks for starting this discussion. I think it's great
for the community to reach a consensus on the roadmap
of Hive query syntax.
I agree that the Hive project is not actively developed nowadays.
However, Hive still occupies the majority of the batch market
and the Hive
Thank you, Till, for everything. It's great to work with you. Good luck!
Best,
Jark
On Mon, 28 Feb 2022 at 21:26, Márton Balassi
wrote:
> Thank you, Till. Good luck with the next chapter. :-)
>
> On Mon, Feb 28, 2022 at 1:49 PM Flavio Pompermaier
> wrote:
>
> > Good luck for your new
/opt/flink/examples/sql/word_count.sql
>
> Best,
> Yang
>
> Jark Wu 于2022年2月16日周三 20:00写道:
>
> > I think this mode is still limited and maybe not easy to extend.
> > Could the application mode provide an interface to execute?
> > So that clients can impleme
k Zeppelin/SQL CLI could work with such a mode for
> non-interactive queries (interactive queries would use a session cluster)?
>
> Best,
>
> Konstantin
>
>
> On Sat, Feb 12, 2022 at 4:31 AM Jark Wu wrote:
>
> > Hi David,
> >
> > Zeppelin and SQL CLI also s
active clients (eg. Zeppelin that you've mentioned)? Aren't
> these a natural fit for the session cluster?
>
> D.
>
> On Fri, Feb 11, 2022 at 3:25 PM Jark Wu wrote:
>
> > Hi Konstantin,
> >
> > I'm not very familiar with the implementation of per-job mode and
Hi Konstantin,
I'm not very familiar with the implementation of per-job mode and
application mode.
But is there any instruction for users about how to migrate platforms/jobs
to application mode?
IIUC, the biggest difference between the two modes is where the main()
method is executed.
However, SQL
Jark Wu created FLINK-26077:
---
Summary: Support operators send request to Coordinator and return
a response
Key: FLINK-26077
URL: https://issues.apache.org/jira/browse/FLINK-26077
Project: Flink
+1 (binding)
Thanks Jing for driving this!
Best,
Jark
On Thu, 20 Jan 2022 at 10:22, Jing Zhang wrote:
> Hi community,
>
> I'd like to start a vote on FLIP-204: Introduce Hash Lookup Join [1] which
> has been discussed in the thread [2].
>
> The vote will be open for at least 72 hours unless
+1 to merge the two modules. This can avoid confusion when developers test
connectors.
Please also remember to add a release note that
"flink-connector-testing" is removed and that
"flink-connector-test-utils" should be used instead.
Best,
Jark
On Tue, 18 Jan 2022 at 21:28, Leonard Xu wrote:
>
I'm also in favour of "flink-table-store".
Best,
Jark
On Mon, 10 Jan 2022 at 16:18, David Morávek wrote:
> Hi Jingsong,
>
> the connector repository prototype I've seen is being built on top of
> Gradle [1], that's why I was referring to it (I think one idea was also to
> migrate the main
> > > >>>
> > > >>> Hi everyone,
> > > >>>
> > > >>> even though the DISCUSS thread was open for 2 weeks. I have the
> > feeling
> > > >>> that the VOTE was initiated to quickly. At least a final &q
?
> >> > > >> >> It could be turned off for the beginning.
> >> > > >> >> To make it supported across different dialects it is required
> to
> >> > have
> >> > > >> such
> >> > > >> >&
+1 (binding)
Btw, @JingZhang I think your vote can be counted as binding now.
Best,
Jark
On Tue, 23 Nov 2021 at 20:19, Jing Zhang wrote:
> +1 (non-binding)
>
> Best,
> Jing Zhang
>
> Martijn Visser 于2021年11月23日周二 下午7:42写道:
>
> > +1 (non-binding)
> >
> > On Tue, 23 Nov 2021 at 12:13, Aitozi
Hi Flavio,
CDC connector's documentation is up-to-date. In fact, none of the
currently released CDC connector versions are compatible with Flink 1.14.
That's also mentioned by Thomas "the fact that X.Y.0 releases
tend to break downstream in one way or the other due to unexpected
upstream changes."
Congratulations Yangze!
Best,
Jark
On Fri, 12 Nov 2021 at 14:06, JING ZHANG wrote:
> Congrats!
>
> Best,
> Jing Zhang
>
> Lijie Wang 于2021年11月12日周五 下午1:25写道:
>
> > Congrats Yangze!
> >
> > Best,
> > Lijie
> >
> > Yuan Mei 于2021年11月12日周五 下午1:10写道:
> >
> > > Congrats Yangze!
> > >
> > > Best
>
+1 (binding)
Thanks for the great work Jingsong!
Best,
Jark
On Thu, 11 Nov 2021 at 19:41, JING ZHANG wrote:
> +1 (non-binding)
>
> A small suggestion:
> The message queue is currently used to store middle layer data of the
> streaming data warehouse. We hope to use built-in dynamic table storage
committer!
Cheers,
Jark Wu
[1]: https://github.com/ververica/flink-cdc-connectors
+1 to LTS. This is good news for users upgrading their versions.
We should also publish the version support roadmap on public wiki, just
like how Java does [1].
The roadmap should include the LTS support end dates, as well as the
planned future LTS versions.
Best,
Jark
[1]:
+1 for this. It looks much more clear and structured.
Best,
Jark
On Thu, 11 Nov 2021 at 17:23, Chesnay Schepler wrote:
> I'm generally in favor of it, and there are already tickets that
> proposed a dedicated operator/vertex description:
>
> https://issues.apache.org/jira/browse/FLINK-20388
>
+1 (binding)
Best,
Jark
On Mon, 8 Nov 2021 at 23:57, Till Rohrmann wrote:
> +1 (binding)
>
> Cheers,
> Till
>
> On Fri, Nov 5, 2021 at 9:24 PM Ingo Bürk wrote:
>
> > +1 (non-binding)
> >
> > I think it has all been said before. :-)
> >
> > On Fri, Nov 5, 2021, 17:45 Konstantin Knauf wrote:
>
Awesome demo, looking forward to these features!
I only have a minor comment: could we provide a config to enable/disable
the prompt values?
We can also discuss whether we can enable all the new features by default
to give them more exposure.
Best,
Jark
On Tue, 2 Nov 2021 at 10:48, JING ZHANG
ce login is snuyanzin
>
> On Tue, Oct 26, 2021 at 4:48 PM Jark Wu wrote:
>
> > Hi Sergey,
> >
> > Welcome contributions! You can read the FLIP introduction [1] first.
> > I think FLIP-163 is a good example for you which is a SQL Client
> > improvement we m
Hi Sergey,
Welcome contributions! You can read the FLIP introduction [1] first.
I think FLIP-163 is a good example for you which is a SQL Client
improvement we made in 1.13.
[1]:
https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals
[2]:
Jark Wu created FLINK-24607:
---
Summary: SourceCoordinator may miss to close SplitEnumerator when
failover frequently
Key: FLINK-24607
URL: https://issues.apache.org/jira/browse/FLINK-24607
Project: Flink
> each connector as Arvid suggested. IMO we shouldn't block this effort on
> the stability of the APIs.
>
> Cheers,
>
> Konstantin
>
>
>
> On Wed, Oct 20, 2021 at 8:56 AM Jark Wu wrote:
>
>> Hi,
>>
>> I think Thomas raised very good questions and would like to
Hi,
I think Thomas raised very good questions and would like to know your
opinions if we want to move connectors out of flink in this version.
(1) is the connector API already stable?
> Separate releases would only make sense if the core Flink surface is
> fairly stable though. As evident from
Jark Wu created FLINK-24512:
---
Summary: Allow metadata columns can also be part of primary key
Key: FLINK-24512
URL: https://issues.apache.org/jira/browse/FLINK-24512
Project: Flink
Issue Type: Bug
Jark Wu created FLINK-24511:
---
Summary: "sink.parallelism" doesn't work for upsert "values" sink
Key: FLINK-24511
URL: https://issues.apache.org/jira/browse/FLINK-24511
Project: Flink
Hi Francesco,
Do you have a use case for "Future>"?
When do users need this?
Best,
Jark
On Mon, 11 Oct 2021 at 21:27, Francesco Guardiani
wrote:
> Hi all,
> Looking at the TableResult type I was wondering, given the interface has
> await methods, should we instead implement the JDK's Future
Hi Nikolay,
I have assigned this ticket to you. Welcome, and thanks for contributing!
Usually, you can comment under the JIRA issue, and a committer will assign
it to you.
Best,
Jark
On Fri, 8 Oct 2021 at 16:32, Nikolay Izhikov wrote:
> Hello.
>
> I want to contribute to the Flink.
>
> FLINK-24389 [1]
wo
> > releases into the future might be a good rule of thumb.
> >
> > Best, Piotrek
> >
> > [1] "[DISCUSS] Dealing with deprecated and legacy code in Flink" on the
> dev
> > mailing list
> >
> >
> > Fri, 1 Oct 2021 at 16:56 Jark W
> It doesn't make sense to keep them PublicEvolving on the annotation but
> implicitly assume them to be Public.
>
> @Jark Wu I don't see a way to revert the change of
> SourceReaderContext#metricGroup. For now, connector devs that expose
> metrics need to release 2 versions. I
Hi all,
Nice thread and great discussion! Ecosystem is one of the most important
things
to the Flink community, we should pay more attention to API compatibility.
Marking all connector APIs @Public is a good idea, not only mark the
Table/SQL
connector APIs public, but also the new Source/Sink
Thanks Leonard for volunteering to release 1.13.3.
I can also help from the PMC side if you need.
Best,
Jark
On Fri, 1 Oct 2021 at 10:32, Leonard Xu wrote:
> > From the SQL primary key issue side, all backports are merged. Feel free
> to proceed with the release.
>
>
> The changelog ordering
+1
I think the purpose of the release note page has not changed over the
previous releases.
It's a guideline for users upgrading Flink, which only contains API-like
changes.
All notable features should be included in the announcement blog which is
more visible to users.
Best,
Jark
On Mon, 13
Thanks Chesnay for the migration work,
However, I think the domain name "nightlies.apache.org" does not sound like
an official address, and the current documentation URL is a bit long
https://nightlies.apache.org/flink/flink-docs-release-1.14/.
Is it possible to migrate to
Jark Wu created FLINK-24204:
---
Summary: Failed to insert nested types using value constructed
functions
Key: FLINK-24204
URL: https://issues.apache.org/jira/browse/FLINK-24204
Project: Flink
Thanks Leonard,
I have seen many users complaining that the Flink mailing list doesn't
work (they were using Nabble).
I think this information would be very helpful.
Best,
Jark
On Mon, 6 Sept 2021 at 16:39, Leonard Xu wrote:
> Hi, all
>
> The mailing list archive service Nabble Archive was
Hi Xingcan, Timo,
Yes, flink-cdc-connector and JDBC connector also don't support larger
precision or no precision.
However, we didn't receive any users reporting this problem.
Maybe it is not very common that precision is higher than 38 or without
precision.
I think it makes sense to support
Jark Wu created FLINK-24026:
---
Summary: Fix FLIP-XXX link can't be recognized correctly by IDEA
Key: FLINK-24026
URL: https://issues.apache.org/jira/browse/FLINK-24026
Project: Flink
Issue Type
+1 to merge this PR for 1.14 release.
This is a widely used and stable feature, so it would be nice to be enabled
by default.
Best,
Jark
On Tue, 24 Aug 2021 at 17:02, Jingsong Li wrote:
> Hi everyone,
>
> Details about FLINK-23755:
>
> Before this JIRA, the below SQL will throw an exception:
Jark Wu created FLINK-23848:
---
Summary: PulsarSourceITCase is failed on Azure
Key: FLINK-23848
URL: https://issues.apache.org/jira/browse/FLINK-23848
Project: Flink
Issue Type: Improvement
+1 (binding)
- checked/verified signatures and hashes
- started cluster, ran examples, verified web ui and log output, nothing
unexpected
- started cluster and ran some e2e SQL queries using the SQL client; looks good:
- read from kafka source, aggregate, write into mysql
- read from kafka source
Congratulations Yang Wang!
Best,
Jark
On Wed, 7 Jul 2021 at 10:09, Xintong Song wrote:
> Hi everyone,
>
> On behalf of the PMC, I'm very happy to announce Yang Wang as a new Flink
> committer.
>
> Yang has been a very active contributor for more than two years, mainly
> focusing on Flink's
Congratulations Guowei!
Best,
Jark
On Wed, 7 Jul 2021 at 09:54, XING JIN wrote:
> Congratulations, Guowei~ !
>
> Best,
> Jin
>
> Xintong Song 于2021年7月7日周三 上午9:37写道:
>
> > Congratulations, Guowei~!
> >
> > Thank you~
> >
> > Xintong Song
> >
> >
> >
> > On Wed, Jul 7, 2021 at 9:31 AM Qingsheng
Thanks Timo for the great work!
It's a milestone that we finally finished the blink merge work!
Cheers,
Jark
On Wed, 7 Jul 2021 at 09:42, Leonard Xu wrote:
> Thanks Timo for the great work.
>
> Developers and user can finally care about only one planner after Flink
> 1.14 released.
>
>
cleaned accurately by watermark.
We don't need to expose `table.exec.emit.late-fire.enabled` in the docs and
can remove it in the next version.
Best,
Jark
On Thu, 1 Jul 2021 at 21:20, Jark Wu wrote:
> Thanks Jing for bringing up this topic,
>
> The emit strategy configs are annotated as Exp
Thanks Jing for bringing up this topic,
The emit strategy configs are annotated as Experimental and are not
documented publicly.
However, I see this is a very useful feature which many users are looking
for.
I have shared these configs in answers to many questions like "how to
handle late events in SQL".
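As a sketch, the experimental emit-strategy options discussed here can be set in the SQL client like this. Only `table.exec.emit.late-fire.enabled` is named in the thread; the companion delay option is an assumption on my part, and all of these are experimental and may change:

```sql
-- Experimental, undocumented emit-strategy options: let windows fire
-- again when late events arrive (option names may vary by version)
SET 'table.exec.emit.late-fire.enabled' = 'true';
SET 'table.exec.emit.late-fire.delay' = '1 min';
```

Because late firing keeps window state around longer, it is usually combined with a state retention setting so that stale state is eventually cleaned up.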
Thanks to Xintong for bringing up this topic, I'm +1 in general.
However, I think it's still not very clear how we address the unstable
tests.
I think this is a very important part of this new guideline.
According to the discussion above, if some tests are unstable, we can
manually disable them.
Hi,
`TIMESTAMP_WITH_TIME_ZONE` is not supported in the Flink SQL engine,
even though it is listed in the type API.
I think what you are looking for is RawValueType, which can be used as a
user-defined type. You can use `DataTypes.RAW(TypeInformation)` to define
a RAW type with the given
+1 (binding)
Best,
Jark
On Mon, 21 Jun 2021 at 22:51, Timo Walther wrote:
> +1 (binding)
>
> Thanks for driving this.
>
> Regards,
> Timo
>
> On 21.06.21 13:24, Ingo Bürk wrote:
> > Hi everyone,
> >
> > thanks for all the feedback so far. Based on the discussion[1] we seem to
> > have
Jark Wu created FLINK-23012:
---
Summary: Add v1.13 docs link in "Pick Docs Version" for master
branch
Key: FLINK-23012
URL: https://issues.apache.org/jira/browse/FLINK-23012
Project: Flink
Congratulations, Xintong! Well deserved!
Best,
Jark
On Wed, 16 Jun 2021 at 21:06, Leonard Xu wrote:
>
> Congratulations, Xintong!
>
>
> Best,
> Leonard
> > 在 2021年6月16日,20:07,Till Rohrmann 写道:
> >
> > Congratulations, Xintong!
> >
> > Cheers,
> > Till
> >
> > On Wed, Jun 16, 2021 at 1:47 PM
Thanks Ingo for picking up this FLIP.
FLIP-129 is an important piece to have a complete Table SQL story,
and users have been waiting for a long time. Let's finish it in this
release!
Your proposed changes look good to me.
I also cc'd people who voted in previous FLIP-129.
Best,
Jark
On Thu, 10
Jark Wu created FLINK-22936:
---
Summary: Support column comment in Schema and ResolvedSchema
Key: FLINK-22936
URL: https://issues.apache.org/jira/browse/FLINK-22936
Project: Flink
Issue Type: New
Thanks Xintong for the summary,
I'm a big +1 for this feature.
Xintong's summary for Table/SQL's needs is correct.
The "custom (broadcast) event" feature is important to us
and even blocks further awesome features and optimizations in Table/SQL.
I also discussed offline with @Yun Gao several
+1 (binding)
Best,
Jark
On Thu, 3 Jun 2021 at 21:34, Dawid Wysakowicz
wrote:
> +1 (binding)
>
> Best,
>
> Dawid
>
> On 03/06/2021 03:50, Zhou, Brian wrote:
> > +1 (non-binding)
> >
> > Thanks Eron, looking forward to seeing this feature soon.
> >
> > Thanks,
> > Brian
> >
> > -Original
Thanks Danny for starting the discussion of extending CTAS syntax.
I think this is a very useful feature for data integration and ETL jobs (a
big use case of Flink).
Many users complain a lot that manually defining schemas for sources and
sinks is hard.
CTAS helps users to write ETL jobs without
her Koitawala
>
> On Wed, May 5, 2021 at 3:53 PM Jark Wu wrote:
>
> > Hi Taher,
> >
> > Currently, Flink (SQL) CDC doesn't support automatic schema change
> > and doesn't support to consume schema change events in source.
> > But you can upgrade schema manua
Hi Taher,
Currently, Flink (SQL) CDC doesn't support automatic schema change
and doesn't support consuming schema change events in the source.
But you can upgrade schema manually, for example, if you have a table
with columns [a, b, c], you can define a flink table t1 with these 3
columns.
When you
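A minimal sketch of the manual approach described above. The connector options and column names are hypothetical placeholders for illustration, not taken from the thread:

```sql
-- Hypothetical MySQL CDC source declaring the current columns [a, b, c];
-- after an upstream schema change, the table must be re-declared manually
CREATE TABLE t1 (
  a INT,
  b STRING,
  c STRING
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'localhost',
  'port' = '3306',
  'username' = 'flinkuser',
  'password' = '******',
  'database-name' = 'mydb',
  'table-name' = 't1'
);
```

If a column `d` is later added upstream, the Flink table does not pick it up automatically; the user drops and re-creates `t1` with the new column list.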
Jark Wu created FLINK-22540:
---
Summary: Remove YAML environment file support in SQL Client
Key: FLINK-22540
URL: https://issues.apache.org/jira/browse/FLINK-22540
Project: Flink
Issue Type: Sub
+1 (binding)
- checked/verified signatures and hashes
- started cluster and ran some e2e SQL queries using the SQL Client; results
are as expected:
* read from kafka source, window aggregate, lookup mysql database, write
into elasticsearch
* window aggregate using legacy window syntax and new window
Jark Wu created FLINK-22523:
---
Summary: TUMBLE TVF should throw helpful exception when specifying
second interval parameter
Key: FLINK-22523
URL: https://issues.apache.org/jira/browse/FLINK-22523
Project
Jark Wu created FLINK-22522:
---
Summary: BytesHashMap has many verbose logs
Key: FLINK-22522
URL: https://issues.apache.org/jira/browse/FLINK-22522
Project: Flink
Issue Type: Bug
Hi Etienne,
AFAIK, only blink planner (and batch mode) support TPCDS benchmarks
and blink planner does support global ORDER BY.
The docs link you mentioned above refers to the DataSet API,
blink planner implements batch mode on DataStream (actually low-level
StreamOperator)
instead of DataSet
committer!
Cheers,
Jark Wu
I also think this is an attractive feature, which exposes Flink's CDC engine
to a wider audience and more usage scenarios.
I'm also fine with merging it to release 1.13.
Best,
Jark
On Wed, 21 Apr 2021 at 17:00, Dawid Wysakowicz
wrote:
> Hi Timo,
>
> First of all, thanks for giving a good example of
Jark Wu created FLINK-22356:
---
Summary: Filesystem/Hive partition file is not committed when
watermark is applied on rowtime of TIMESTAMP_LTZ type
Key: FLINK-22356
URL: https://issues.apache.org/jira/browse/FLINK-22356
Jark Wu created FLINK-22354:
---
Summary: Failed to define watermark on computed column of
CURRENT_TIMESTAMP and LOCALTIMESTAMP
Key: FLINK-22354
URL: https://issues.apache.org/jira/browse/FLINK-22354
Project
Jark Wu created FLINK-22349:
---
Summary: LOCALTIMESTAMP doesn't return correct result when setting
session time zone to 'UTC-09:00'
Key: FLINK-22349
URL: https://issues.apache.org/jira/browse/FLINK-22349
Jark Wu created FLINK-22346:
---
Summary: sql-client-defaults.yaml should be removed from dist
Key: FLINK-22346
URL: https://issues.apache.org/jira/browse/FLINK-22346
Project: Flink
Issue Type: Bug
Jark Wu created FLINK-22318:
---
Summary: Support RENAME column name for ALTER TABLE statement
Key: FLINK-22318
URL: https://issues.apache.org/jira/browse/FLINK-22318
Project: Flink
Issue Type: Sub
Jark Wu created FLINK-22319:
---
Summary: Support RESET table option for ALTER TABLE statement
Key: FLINK-22319
URL: https://issues.apache.org/jira/browse/FLINK-22319
Project: Flink
Issue Type: Sub
Jark Wu created FLINK-22317:
---
Summary: Support DROP column/constraint/watermark for ALTER TABLE
statement
Key: FLINK-22317
URL: https://issues.apache.org/jira/browse/FLINK-22317
Project: Flink