Hi Divya,
I think you may not have subscribed to the dev mailing list, which is why you
can't see Seth's reply.
You can subscribe to the dev mailing list by sending an email to
dev-subscr...@flink.apache.org
I have copied Seth's reply below; I hope it helps:
===
Jark Wu created FLINK-17826:
---
Summary: Add missing custom query support on new jdbc connector
Key: FLINK-17826
URL: https://issues.apache.org/jira/browse/FLINK-17826
Project: Flink
Issue Type: Sub
Jark Wu created FLINK-17807:
---
Summary: Fix the broken link "/zh/ops/memory/mem_detail.html" in
documentation
Key: FLINK-17807
URL: https://issues.apache.org/jira/browse/FLINK-17807
Proj
Jark Wu created FLINK-17798:
---
Summary: Align the behavior between the new and legacy JDBC table
source
Key: FLINK-17798
URL: https://issues.apache.org/jira/browse/FLINK-17798
Project: Flink
Issue
Jark Wu created FLINK-17797:
---
Summary: Align the behavior between the new and legacy HBase table
source
Key: FLINK-17797
URL: https://issues.apache.org/jira/browse/FLINK-17797
Project: Flink
Jark Wu created FLINK-17752:
---
Summary: Align the timestamp format with Flink SQL types in JSON
format
Key: FLINK-17752
URL: https://issues.apache.org/jira/browse/FLINK-17752
Project: Flink
Issue
Jark Wu created FLINK-17693:
---
Summary: Add createTypeInformation to DynamicTableSink#Context
Key: FLINK-17693
URL: https://issues.apache.org/jira/browse/FLINK-17693
Project: Flink
Issue Type: Sub
Jark Wu created FLINK-17689:
---
Summary: Add integration tests for Debezium and Canal formats
Key: FLINK-17689
URL: https://issues.apache.org/jira/browse/FLINK-17689
Project: Flink
Issue Type: Sub
Jark Wu created FLINK-17647:
---
Summary: Improve new connector options exception in old planner
Key: FLINK-17647
URL: https://issues.apache.org/jira/browse/FLINK-17647
Project: Flink
Issue Type: Sub
Jark Wu created FLINK-17633:
---
Summary: Improve FactoryUtil to align with new format options keys
Key: FLINK-17633
URL: https://issues.apache.org/jira/browse/FLINK-17633
Project: Flink
Issue Type
Jark Wu created FLINK-17630:
---
Summary: Implement format factory for Avro serialization and
deserialization schema
Key: FLINK-17630
URL: https://issues.apache.org/jira/browse/FLINK-17630
Project: Flink
Jark Wu created FLINK-17629:
---
Summary: Implement format factory for JSON serialization and
deserialization schema
Key: FLINK-17629
URL: https://issues.apache.org/jira/browse/FLINK-17629
Project: Flink
Jark Wu created FLINK-17625:
---
Summary: Fix ArrayIndexOutOfBoundsException in
AppendOnlyTopNFunction
Key: FLINK-17625
URL: https://issues.apache.org/jira/browse/FLINK-17625
Project: Flink
Issue
erator is the replacement for the open() method. This is
> the
> > > same strategy that was followed for StreamOperatorFactory, which was
> > > introduced to allow code generation in the Table API [1]. If we need
> > > metrics or other things we would add that as a par
Hi,
Regarding `open()/close()`: I think it's necessary for Table & SQL to
compile the generated code.
In Table & SQL, the watermark strategy and event-time timestamp are defined
using SQL expressions, and we
translate the expressions and generate Java code for them. If we have
`open()/close()`, we don't
+1 to returning an empty iterator and aligning the implementations.
Best,
Jark
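To make the lifecycle argument concrete, here is a minimal plain-Java sketch (all names are invented for illustration; this is not Flink's actual interface): a code-generated expression needs a one-time `open()` step before cheap per-record evaluation can happen.

```java
public class GeneratedWatermarkSketch {
    // Hypothetical shape: a code-generated expression that needs a
    // one-time setup step before per-record evaluation.
    public interface TimestampExtractor {
        void open();            // compile / instantiate generated code once
        long extract(long raw); // cheap per-record call afterwards
    }

    public static class GeneratedExtractor implements TimestampExtractor {
        private long offsetMillis;
        private boolean opened;

        @Override
        public void open() {
            // In Table & SQL this is where generated Java code would be
            // compiled and initialized; here we just simulate expensive setup.
            this.offsetMillis = 5_000L;
            this.opened = true;
        }

        @Override
        public long extract(long raw) {
            if (!opened) {
                throw new IllegalStateException("open() must run before extract()");
            }
            return raw - offsetMillis; // e.g. ts - INTERVAL '5' SECOND
        }
    }

    public static void main(String[] args) {
        TimestampExtractor e = new GeneratedExtractor();
        e.open();
        System.out.println(e.extract(10_000L)); // 5000
    }
}
```

Without the `open()` hook, every `extract()` call would have to guard and redo the setup itself, which is exactly what the lifecycle method avoids.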
On Sat, 9 May 2020 at 19:18, SteNicholas wrote:
> Hi Tang Yun,
> I agree with your point that we should align these internal
> behaviors
> to return an empty iterator instead of null. In my opinion,
> StateMapViewWithKeysN
Jark Wu created FLINK-17591:
---
Summary: TableEnvironmentITCase.testExecuteSqlAndToDataStream
failed
Key: FLINK-17591
URL: https://issues.apache.org/jira/browse/FLINK-17591
Project: Flink
Issue
lso support temporal tables derived from an append-only
>> stream, we either need to support TEMPORAL VIEW (as mentioned by Fabian)
>> or
>> need to have a way to convert an append-only table into a changelog table
>> (briefly discussed in [1]). It is not completely clear to m
Hi Lec,
You can use `StreamTableEnvironment#toRetractStream(table, Row.class)` to
get a `DataStream<Tuple2<Boolean, Row>>`.
A true Boolean flag indicates an add message; a false flag indicates a
retract (delete) message. So you can simply apply
a flatMap function after this to ignore the false messages. Then y
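The add/retract flag handling can be sketched without Flink (plain Java; the `Row` payload is simplified to a `String` and the tuple is modeled with `Map.Entry` — both assumptions for the sketch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class RetractSketch {
    // Each element mimics Flink's Tuple2<Boolean, Row>: true = add (insert),
    // false = retract (delete). The "Row" is simplified to a String here.
    public static List<String> keepAdds(List<Map.Entry<Boolean, String>> stream) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<Boolean, String> msg : stream) {
            if (msg.getKey()) {   // the flatMap step: forward only add messages
                out.add(msg.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map.Entry<Boolean, String>> retractStream = List.of(
                Map.entry(true, "user=1,cnt=1"),
                Map.entry(false, "user=1,cnt=1"), // retracts the previous result
                Map.entry(true, "user=1,cnt=2"));
        System.out.println(keepAdds(retractStream));
    }
}
```

In a real pipeline the same filtering would be a `flatMap` (or `filter`) on the `DataStream<Tuple2<Boolean, Row>>` returned by `toRetractStream`.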
the formats like parquet and orc,
> > Not just Flink itself; this way also keeps Flink's keys compatible with the
> > property keys of Hadoop / Hive / Spark.
> >
> > And like Jark said, this way works for Kafka key value too.
> >
> > Best,
> > Jingsong
user experience and good coding style we should be consistent in Flink
> > > connectors and configuration. Because implementers of new connectors
> > > will copy the design of existing ones.
> > >
> > > Furthermore, I could imagine that people in the DataStream API would also
Jark Wu created FLINK-17528:
---
Summary: Use getters instead of RowData#get() utility in
JsonRowDataSerializationSchema
Key: FLINK-17528
URL: https://issues.apache.org/jira/browse/FLINK-17528
Project: Flink
Jark Wu created FLINK-17526:
---
Summary: Support AVRO serialization and deserialization schema for
RowData type
Key: FLINK-17526
URL: https://issues.apache.org/jira/browse/FLINK-17526
Project: Flink
Jark Wu created FLINK-17525:
---
Summary: Support to parse millisecond and nanoseconds for TIME
type in CSV and JSON format
Key: FLINK-17525
URL: https://issues.apache.org/jira/browse/FLINK-17525
Project
plicate on each incoming record?
>
> Best,
> Andrey
>
> [1] note 2 in
>
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/state.html#incremental-cleanup
>
> On Wed, Apr 29, 2020 at 11:53 AM 刘大龙 wrote:
>
> >
> >
> >
Big +1 to this.
Best,
Jark
On Mon, 4 May 2020 at 23:44, Till Rohrmann wrote:
> Hi everyone,
>
> due to some changes on the ASF side, we are now seeing issue and pull
> request notifications for the flink-web [1] and flink-shaded [2] repo on
> dev@flink.apache.org. I think this is not ideal sinc
lattened(key/value) representation so I agree it is not as important as
>
> in
>
> the aforementioned case. Nevertheless having a yaml based catalog or
>
> being
>
> able to have e.g. yaml based snapshots of a catalog in my opinion is
> appealing. At the same time
out the statements order, such as: no select
> > in
> > > >> the
> > > >>>>>>>>> middle,
> > > >>>>>>>>>> dml must be at tail of sql file (which may be the most case
> in
> > > >>>> product
Big +1 from my side.
The new structure and class names look nicer now.
Regarding the compatibility problem, I have looked into the public APIs in
flink-jdbc; there are 3 kinds of APIs now:
1) new introduced JdbcSink for DataStream users in 1.11
2) JDBCAppendTableSink, JDBCUpsertTableSink, JDBCTab
> > > > into blackhole select a /*int*/, b /*string*/ from tableA", "insert
> > into
> > > > blackhole select a /*double*/, b /*Map*/, c /*string*/ from tableB".
> It
> > > > seems that Blackhole is a universal thing, which makes me fe
Hi,
Welcome to the community!
There is no contributor permission anymore; you can just comment under the JIRA
issue,
and a committer will assign the issue to you if no one is working on it.
Best,
Jark
On Thu, 30 Apr 2020 at 17:36, flinker wrote:
> Hi,
>
> I want to contribute to Apache Flink.
> Would
:lang-mustache-client nor
> com.github.spullara.mustache.java:compiler (and thus is also not bundling
> them).
>
> You can check this yourself by packaging the connector and comparing the
> shade-plugin output with the NOTICE file.
>
> On 30/04/2020 08:55, Jark Wu wrote:
#diff-bd2211176ab6e7fa83ffeaa89481ff38
On Thu, 30 Apr 2020 at 14:44, Chesnay Schepler wrote:
> ES6 isn't bundling these dependencies.
>
> On 29/04/2020 17:29, Jark Wu wrote:
> > Looks like the ES NOTICE problem is a long-standing problem, because the
> > ES6 sql connecto
Looks like the ES NOTICE problem is a long-standing problem, because the
ES6 sql connector NOTICE also misses these dependencies.
Best,
Jark
On Wed, 29 Apr 2020 at 17:26, Robert Metzger wrote:
> Thanks for taking a look Chesnay. Then let me officially cancel the
> release:
>
> -1 (binding)
>
>
From a user's perspective, I prefer the shorter "format=json", because
it's more concise and straightforward. The "kind" is redundant for users.
Is there a real case that requires representing the configuration in JSON style?
As far as I can see, I don't see such a requirement, and everything works
f
Jark Wu created FLINK-17462:
---
Summary: Support CSV serialization and deserialization schema for
RowData type
Key: FLINK-17462
URL: https://issues.apache.org/jira/browse/FLINK-17462
Project: Flink
Jark Wu created FLINK-17461:
---
Summary: Support JSON serialization and deserialization schema for
RowData type
Key: FLINK-17461
URL: https://issues.apache.org/jira/browse/FLINK-17461
Project: Flink
Hi lsyldliu,
Thanks for investigating this.
First of all, if you are using mini-batch deduplication, it doesn't support
state TTL in 1.9. That's why the TPS looks the same as 1.11 with state
TTL disabled.
We only introduced state TTL for mini-batch deduplication recently.
Regarding the performance
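As a rough model of what state TTL means for deduplication (a simplified sketch, not Flink's actual state backend: expiry is checked on access and the state timestamp is refreshed on every record):

```java
import java.util.HashMap;
import java.util.Map;

public class TtlDedupSketch {
    // Simplified deduplication state with TTL: key -> last-seen time.
    // Entries older than ttlMillis count as expired, so the key is
    // emitted again as if seen for the first time.
    private final Map<String, Long> lastSeen = new HashMap<>();
    private final long ttlMillis;

    public TtlDedupSketch(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    /** Returns true if the record should be emitted (first time, or state expired). */
    public boolean process(String key, long now) {
        Long seen = lastSeen.get(key);
        boolean emit = (seen == null) || (now - seen > ttlMillis);
        lastSeen.put(key, now); // refresh the state timestamp on every access
        return emit;
    }

    public static void main(String[] args) {
        TtlDedupSketch dedup = new TtlDedupSketch(1000);
        System.out.println(dedup.process("a", 0));    // true: first occurrence
        System.out.println(dedup.process("a", 500));  // false: duplicate within TTL
        System.out.println(dedup.process("a", 2000)); // true: state expired
    }
}
```

Without TTL, `lastSeen` grows without bound, which is why enabling it matters for long-running deduplication jobs.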
Jark Wu created FLINK-17437:
---
Summary: Use StringData instead of BinaryStringData in code
generation
Key: FLINK-17437
URL: https://issues.apache.org/jira/browse/FLINK-17437
Project: Flink
Issue
Jark Wu created FLINK-17430:
---
Summary: Support SupportsPartitioning in planner
Key: FLINK-17430
URL: https://issues.apache.org/jira/browse/FLINK-17430
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17429:
---
Summary: Support SupportsOverwrite in planner
Key: FLINK-17429
URL: https://issues.apache.org/jira/browse/FLINK-17429
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17428:
---
Summary: Support SupportsProjectionPushDown in planner
Key: FLINK-17428
URL: https://issues.apache.org/jira/browse/FLINK-17428
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17427:
---
Summary: Support SupportsPartitionPushDown in planner
Key: FLINK-17427
URL: https://issues.apache.org/jira/browse/FLINK-17427
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17426:
---
Summary: Support SupportsLimitPushDown in planner
Key: FLINK-17426
URL: https://issues.apache.org/jira/browse/FLINK-17426
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17425:
---
Summary: Support SupportsFilterPushDown in planner
Key: FLINK-17425
URL: https://issues.apache.org/jira/browse/FLINK-17425
Project: Flink
Issue Type: Sub-task
> information in FLINK-11286, but in general I'd be supportive with defining
> watermark as close as possible from source, as it'll be easier to reason
> about. (I basically refer to timestamp assigner instead of watermark
> assigner though.)
>
> - Jungtaek Lim
>
Hi Jungtaek,
Kurt has said what I want to say. I will add some background.
Flink Table API & SQL only supports defining the processing-time attribute and
the event-time attribute (watermark) on the source; it does not support
defining a new one in a query.
The time attributes will pass through the query and time-base
+1 for xyz.[min|max]
This is already mentioned in the Code Style Guideline [1].
Best,
Jark
[1]:
https://flink.apache.org/contributing/code-style-and-quality-components.html#configuration-changes
On Mon, 27 Apr 2020 at 21:33, Flavio Pompermaier
wrote:
> +1 for Chesnay approach
>
> On Mon, Apr
Thanks Dian for being the release manager and thanks all who make this
possible.
Best,
Jark
On Sun, 26 Apr 2020 at 18:06, Leonard Xu wrote:
> Thanks Dian for the release and being the release manager !
>
> Best,
> Leonard Xu
>
>
> On 26 Apr 2020, at 17:58, Benchao Li wrote:
>
> Thanks Dian for the effor
Jark Wu created FLINK-17385:
---
Summary: Fix precision problem when converting JDBC numeric into
Flink decimal type
Key: FLINK-17385
URL: https://issues.apache.org/jira/browse/FLINK-17385
Project: Flink
+1
Thanks,
Jark
On Thu, 23 Apr 2020 at 22:36, Xintong Song wrote:
> +1
> From our side we can also benefit from extending the feature freeze, for
> pluggable slot allocation, GPU support, and per-job mode on Kubernetes
> deployment.
>
> Thank you~
>
> Xintong Song
>
>
>
> On Thu, Apr 23, 2020
Jark Wu created FLINK-17337:
---
Summary: Send UPDATE messages instead of INSERT and DELETE in
streaming join operator
Key: FLINK-17337
URL: https://issues.apache.org/jira/browse/FLINK-17337
Project: Flink
TABLE and TEMPORAL VIEW
> would be a nice-to-have feature for some later time.
>
> Cheers, Fabian
>
>
>
>
>
>
> Am Fr., 17. Apr. 2020 um 18:13 Uhr schrieb Jark Wu :
>
> > Hi Fabian,
> >
> > I think converting an append-only table into temporal
c type (not sure if views are
> > > > supported), but I guess this is fine.
> > > > NOTE: the "FOR SYSTEM_TIME AS OF x" is already supported for
> > LookupTable
> > > > Joins if x is a processing time attribute [2].
> > > >
Congratulations Hequn!
Best,
Jark
On Fri, 17 Apr 2020 at 15:32, Yangze Guo wrote:
> Congratulations!
>
> Best,
> Yangze Guo
>
> On Fri, Apr 17, 2020 at 3:19 PM Jeff Zhang wrote:
> >
> > Congratulations, Hequn!
> >
> > On Fri, 17 Apr 2020 at 3:02 PM, Paul Lam wrote:
> >
> > > Congrats Hequn! Thanks a lot for
Hi Konstantin,
Thanks for bringing up this discussion. I think temporal join is a very
important feature and should be exposed to pure SQL users,
and I have already received many requests like this.
However, my concern is how to properly support this feature in SQL.
Introducing a DDL syntax for T
but also providing a mechanism to load connectors
> > > according
> > > to the DDLs,
> > >
> > > So I think it could be good to place connector/format jars in some
> > > dir like
> > > opt/connector which would not affect jobs by default, a
+1 (binding)
Thanks Dawid for driving this.
Best,
Jark
On Thu, 16 Apr 2020 at 15:54, Dawid Wysakowicz
wrote:
> Hi all,
>
> I would like to start the vote for FLIP-124 [1], which is discussed and
> reached a consensus in the discussion thread [2].
>
> The vote will be open until April 20th, unle
+1 for releasing 1.9.3 soon.
Thanks Dian for driving this!
Best,
Jark
On Wed, 15 Apr 2020 at 22:11, Congxian Qiu wrote:
> +1 to creating a new 1.9 bugfix release. And FLINK-16576 [1] has been merged
> into master; a PR for release-1.9 has been filed already
>
> [1] https://issues.apache.org/jira/browse/FLINK-16
Jark Wu created FLINK-17169:
---
Summary: Refactor BaseRow to use RowKind instead of byte header
Key: FLINK-17169
URL: https://issues.apache.org/jira/browse/FLINK-17169
Project: Flink
Issue Type: Sub
he wrote:
> >>
> >>> Big +1.
> >>> This will improve user experience (special for Flink new users).
> >>> We answered so many questions about "class not found".
> >>>
> >>> Best,
> >>> Godfrey
memory (1 or 2
> GB)
> > >> the
> > >> >>>> > > metaspace
> > >> >>>> > > > > > increase is more likely to cause problem.
> > >> >>>> > > > > >
Jark Wu created FLINK-17157:
---
Summary: TaskMailboxProcessorTest.testIdleTime failed on travis
Key: FLINK-17157
URL: https://issues.apache.org/jira/browse/FLINK-17157
Project: Flink
Issue Type: Bug
+1 to the proposal. I also found the "download additional jar" step is
really verbose when I prepare webinars.
At least, I think flink-csv and flink-json should be in the distribution;
they are quite small and don't have other dependencies.
Best,
Jark
On Wed, 15 Apr 2020 at 15:44, Jeff Zhang w
Jark Wu created FLINK-17150:
---
Summary: Introduce Canal format to support reading canal changelogs
Key: FLINK-17150
URL: https://issues.apache.org/jira/browse/FLINK-17150
Project: Flink
Issue Type
Jark Wu created FLINK-17149:
---
Summary: Introduce Debezium format to support reading debezium
changelogs
Key: FLINK-17149
URL: https://issues.apache.org/jira/browse/FLINK-17149
Project: Flink
Hi all,
The voting time for FLIP-105 has passed. I'm closing the vote now.
There were 5 +1 votes, 3 of which are binding:
- Benchao (non-binding)
- Jark (binding)
- Jingsong Li (binding)
- zoudan (non-binding)
- Kurt (binding)
There were no disapproving votes.
Thus, FLIP-105 has been accepted.
+1 (binding)
Best,
Jark
On Sun, 12 Apr 2020 at 09:24, Benchao Li wrote:
> +1 (non-binding)
>
> On Sat, 11 Apr 2020 at 11:31 AM, Jark Wu wrote:
>
> > Hi all,
> >
> > I would like to start the vote for FLIP-105 [1], which is discussed and
> > reached a consensus in the dis
+1
Best,
Jark
On Sun, 12 Apr 2020 at 12:28, Benchao Li wrote:
> +1 (non-binding)
>
> On Sun, 12 Apr 2020 at 9:52 AM, zoudan wrote:
>
> > +1 (non-binding)
> >
> > Best,
> > Dan Zou
> >
> >
> > > On 10 Apr 2020, at 09:30, Danny Chan wrote:
> > >
> > > +1 from my side.
> > >
> > > Best,
> > > Danny Chan
> > > On 9 Apr 2020
Hi all,
I would like to start the vote for FLIP-105 [1], which is discussed and
reached a consensus in the discussion thread [2].
The vote will be open for at least 72h, unless there is an objection or not
enough votes.
Thanks,
Jark
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-105
Sorry for the late reply.
I have some concerns around "Supporting SHOW VIEWS|DESCRIBE VIEW name".
Currently, in the SQL CLI, "SHOW TABLES" also lists views, and "DESCRIBE
name" can also describe a view.
Shall we remove the view support from those commands if we want to support a
dedicated "SHOW VIE
Hi Xiaogang,
I think this proposal doesn't conflict with your use case, you can still
chain a ProcessFunction after a source which emits raw data.
But I'm not in favor of chaining a ProcessFunction after the source, and we
should avoid that, because:
1) For correctness, it is necessary to perform the w
+1 from my side (binding)
Best,
Jark
On Fri, 10 Apr 2020 at 17:03, Timo Walther wrote:
> +1 (binding)
>
> Thanks for the healthy discussion. I think this feature can be useful
> during the development of a pipeline.
>
> Regards,
> Timo
>
> On 10.04.20 03:34, Danny Chan wrote:
> > Hi all,
> >
>
I didn't find a good name for separate option keys, because JSON is also
a format, not an encoding, but `format.format=json` is weird.
Hi everyone,
If there are no further concerns, I would like to start a voting thread by
tomorrow.
Best,
Jark
On Wed, 8 Apr 2020 at 15:37, Jark Wu wrote
Thanks Yun,
This is a great feature! I was surprised by the autolink feature yesterday
(didn't know your work at that time).
Best,
Jark
On Thu, 9 Apr 2020 at 16:12, Yun Tang wrote:
> Hi community
>
> The autolink to Flink JIRA ticket has taken effect. You could refer to the
> commit details pag
`table.dynamic-table-options.enabled` and `TableConfigOptions` sound good
to me.
Best,
Jark
On Wed, 8 Apr 2020 at 18:59, Danny Chan wrote:
> `table.dynamic-table-options.enabled` seems fine to me, I would make a new
> `TableConfigOptions` class and put the config option there ~
>
> What do you
also
> applies to canal.
>
> Best,
> Kurt
>
>
> On Tue, Apr 7, 2020 at 11:49 AM Jark Wu wrote:
>
> > Hi everyone,
> >
> > Since this FLIP was proposed, the community has discussed a lot about the
> > first approach: introducing new TableSource and Tab
d a table factory and creates table
> source/sink
> > - There is a global config option to disable this feature by default (if
> > a user uses OPTIONS, an exception is thrown telling them to enable the option)
> >
> > I have updated the WIKI
> > <
> https://cwiki.apache
Jark Wu created FLINK-17028:
---
Summary: Introduce a new HBase connector with new property keys
Key: FLINK-17028
URL: https://issues.apache.org/jira/browse/FLINK-17028
Project: Flink
Issue Type: Sub
Jark Wu created FLINK-17029:
---
Summary: Introduce a new JDBC connector with new property keys
Key: FLINK-17029
URL: https://issues.apache.org/jira/browse/FLINK-17029
Project: Flink
Issue Type: Sub
Jark Wu created FLINK-17027:
---
Summary: Introduce a new Elasticsearch connector with new property
keys
Key: FLINK-17027
URL: https://issues.apache.org/jira/browse/FLINK-17027
Project: Flink
Issue
Jark Wu created FLINK-17026:
---
Summary: Introduce a new Kafka connector with new property keys
Key: FLINK-17026
URL: https://issues.apache.org/jira/browse/FLINK-17026
Project: Flink
Issue Type: Sub
Jark Wu created FLINK-17025:
---
Summary: Introduce new set of connectors using new property keys
and new factory interface
Key: FLINK-17025
URL: https://issues.apache.org/jira/browse/FLINK-17025
Project
Hi all,
The voting time for FLIP-122 has passed. I'm closing the vote now.
There were 8 +1 votes, 4 of which are binding:
- Timo (binding)
- Dawid (binding)
- Benchao Li (non-binding)
- Jingsong Li (binding)
- LakeShen (non-binding)
- Leonard Xu (non-binding)
- zoudan (non-binding)
- Jark (bindi
+1 (binding)
Best,
Jark
On Sun, 5 Apr 2020 at 16:38, zoudan wrote:
> +1 (non-binding)
>
> Best,
> Dan Zou
>
>
> > On 3 Apr 2020, at 10:02, LakeShen wrote:
> >
> > +1 (non-binding)
>
>
unctionality required by the interface.
> Nevertheless I am happy to hear other opinions.
>
> @all I also prefer the buffering approach. Let's wait a day or two more
> to see if others think differently.
>
> Best,
>
> Dawid
>
> On 07/04/2020 06:11, Jark Wu wrot
Jark Wu created FLINK-17015:
---
Summary: Fix NPE from NullAwareMapIterator
Key: FLINK-17015
URL: https://issues.apache.org/jira/browse/FLINK-17015
Project: Flink
Issue Type: Bug
Components
Hi Dawid,
Thanks for driving this. This is a blocker for supporting the Debezium CDC
format (FLIP-105), so a big +1 from my side.
Regarding emitting multiple records and checkpointing, I'm also in favor
of option #1: buffer all the records outside of the checkpoint lock.
I think most of the use cases wil
[2]: http://apache-flink.147419.n8.nabble.com/SURVEY-CDC-td1910.html
On Fri, 14 Feb 2020 at 22:08, Jark Wu wrote:
> Hi everyone,
>
> I would like to start discussion about how to support interpreting
> external changelog into Flink SQL, and how to emit changelog from Flink SQL.
I'm fine with disabling this feature by default and avoiding
whitelisting/blacklisting. This simplifies a lot of things.
Regarding TableSourceFactory#Context#getExecutionOptions, do we really
need this interface?
Should the connector factory be aware of whether the properties are merged
with hints or not?
What
Hi all,
I would like to start the vote for FLIP-122 [1], which is discussed and
reached a consensus in the discussion thread [2].
The vote will be open for at least 72h, unless there is an objection or not
enough votes.
Thanks,
Timo
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-122
again.
>
> Regards,
> Timo
>
>
> On 02.04.20 14:06, Jark Wu wrote:
> > Hi Dawid,
> >
> >> How to express projections with TableSchema?
> > The TableSource holds the original TableSchema (i.e. from DDL) and the
> > pushed TableSchema represents the
Xu wrote:
> >> +1(non-binding)
> >>
> >> Best,
> >> Leonard Xu
> >>
> >>> On 30 Mar 2020, at 16:43, Jingsong Li wrote:
> >>>
> >>> +1
> >>>
> >>> Best,
> >>> Jingsong Lee
Congratulations to you all!
Best,
Jark
On Wed, 1 Apr 2020 at 20:33, Kurt Young wrote:
> Congratulations to you all!
>
> Best,
> Kurt
>
>
> On Wed, Apr 1, 2020 at 7:41 PM Danny Chan wrote:
>
> > Congratulations!
> >
> > Best,
> > Danny Chan
> > On 1 Apr 2020 at 7:36 PM +0800, dev@flink.apache.org wrote:
Hi everyone,
If there are no objections, I would like to start a voting thread by
tomorrow. So this is the last call to give feedback for FLIP-122.
Cheers,
Jark
On Wed, 1 Apr 2020 at 16:30, zoudan wrote:
> Hi Jark,
> Thanks for the proposal.
> I like the idea that we put the version in ‘connec
+1 to making the blink planner the default planner.
We should give the blink planner more exposure to encourage users to try out
new features and lead users to migrate to the blink planner.
Glad to see the blink planner has been used in production since 1.9! @Benchao
Best,
Jark
On Wed, 1 Apr 2020 at 11:31, Benchao Li
Jark Wu created FLINK-16889:
---
Summary: Support converting BIGINT to TIMESTAMP for TO_TIMESTAMP
function
Key: FLINK-16889
URL: https://issues.apache.org/jira/browse/FLINK-16889
Project: Flink
+New+Factory
Please let me know if you have other questions.
Best,
Jark
On Wed, 1 Apr 2020 at 00:56, Jark Wu wrote:
> Hi, Dawid
>
> Regarding `connector.property-version`,
> I totally agree with you we should implicitly add a "property-version=1"
> (without 'co
:
> Hi Jark,
>
> Thanks for the proposal. I'm +1 since it's simpler and clearer for SQL
> users.
> I have a question about this: does this affect descriptors and related
> validators?
>
> *Best Regards,*
> *Zhenghua Gao*
>
>
> On Mon, Mar 30, 2020 at
+1 from my side. This will be a very useful feature.
Best,
Jark
> On 31 Mar 2020, at 18:15, Danny Chan wrote:
>
> +1 for this feature. Although the WITH syntax breaks the SQL standard, it's
> compatible with our CREATE TABLE syntax, so it seems fine from my side.
>
> Best,
> Danny Chan
> On 31 Mar 2020 +080