The lookup-style temporal join [1] should be a solution for your case, and
there is an ITCase as an example [2].
[1]
https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/sources/LookupableTableSource.java
[2]
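For context, a lookup-style temporal join can be sketched in Flink SQL as below; the table and column names (Orders, Rates, proctime, currency) are illustrative, not taken from this thread:

```sql
-- Each row of the streaming Orders table probes the lookup table
-- Rates at processing time; Rates would be backed by a
-- LookupableTableSource so the join can do point lookups.
SELECT o.order_id, o.amount, r.rate
FROM Orders AS o
JOIN Rates FOR SYSTEM_TIME AS OF o.proctime AS r
ON o.currency = r.currency;
```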
The implementation plan [1] is updated, with the following changes:
- Add default slot resource profile to
ResourceManagerGateway#registerTaskExecutor rather than #sendSlotReport.
- Swap 'TaskExecutor derive and register with default slot resource
profile' and 'Extend TaskExecutor to
hi, everyone
I think this FLIP is very meaningful. It supports functions that can be
shared by different catalogs and databases, reducing the duplication of functions.
Our group has implemented a CREATE FUNCTION feature based on Flink's SQL parser
module, which stores the parsed function metadata and schema into
Hi folks,
In umbrella task FLINK-10232 we have introduced CREATE/DROP VIEW grammar in
our module flink-sql-parser. But we don't support view objects in either the
Blink planner or the old planner.
I'd like to kick off a discussion on end-to-end view support in Flink SQL
in the Blink planner. It's helpful
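For illustration, these are the kinds of statements the flink-sql-parser grammar already accepts (the view and table names here are made up):

```sql
-- Parsed by flink-sql-parser, but not yet backed by view objects
-- in either planner at the time of this discussion.
CREATE VIEW high_value_orders AS
SELECT order_id, amount
FROM Orders
WHERE amount > 100;

DROP VIEW high_value_orders;
```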
@Till
Thanks for the reminder. I'll add a step for updating the web UI. I'll try
to involve Lining to help us with this step.
@Andrey
I was thinking that after we define the RM-TM interfaces in step 2, it
would be good to concurrently work on both RM and TM side. But yes, if we
finish Step 4
Great to hear.
Best,
Kurt
On Tue, Sep 17, 2019 at 11:45 AM Jun Zhang <825875...@qq.com> wrote:
Hi Kurt:
Thanks.
When I encountered this problem, I found a File System Connector, but
its functionality is not powerful or rich enough.
I also found that it is built into Flink; there are many unit tests
that refer to it, so I dare not modify it lightly to enrich its
Danny Chan created FLINK-14092:
--
Summary: Upgrade Calcite version to 1.21 for Flink SQL
Key: FLINK-14092
URL: https://issues.apache.org/jira/browse/FLINK-14092
Project: Flink
Issue Type:
In umbrella task FLINK-10232 we have introduced CREATE TABLE grammar in our new
module flink-sql-parser. And we proposed to use computed columns to describe the
time attribute of processing time in the design doc FLINK SQL DDL, so a user may
create a table with a processing-time attribute as follows:
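The message is truncated before its example; a sketch of such a DDL, with a hypothetical schema, might look like:

```sql
CREATE TABLE orders (
  order_id BIGINT,
  amount   DECIMAL(10, 2),
  proc AS PROCTIME()  -- processing-time attribute as a computed column
) WITH (
  -- connector properties are illustrative only
  'connector.type' = 'kafka'
);
```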
Thanks to Timo and Dawid for sharing thoughts.
It seems to me that there is a general consensus on having temporary functions
that have no namespaces and can overwrite built-in functions. (As a side note
for comparison, the current user-defined functions are all temporary and
have no namespaces.)
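Under that consensus, registering and resolving such a function might look as follows; the DDL syntax and class name are a sketch of the direction under discussion, not the final FLIP-57 design:

```sql
-- A temporary function with no namespace; it would shadow a
-- built-in or catalog function of the same name.
CREATE TEMPORARY FUNCTION my_upper AS 'com.example.udf.MyUpper';

SELECT my_upper(name) FROM users;  -- resolves to the temporary function
```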
Thanks. Let me clarify a bit more about my thinking. Generally, I would
prefer that we concentrate the functionality around connectors, especially
some standard and most popular connectors, like Kafka and different file
systems with different formats, etc. We should make these core connectors
as
Hi Jun,
Thanks for bringing this up, in general I'm +1 on this feature. As
you might know, there is another ongoing effort around this kind
of table sink, which is covered in the newly proposed partition support
reworking [1]. In this proposal, we also want to introduce a new
file system connector, which
Peng Wang created FLINK-14091:
-
Summary: Job can not trigger checkpoint forever after zookeeper
change leader
Key: FLINK-14091
URL: https://issues.apache.org/jira/browse/FLINK-14091
Project: Flink
Hi,
Another idea to consider on top of Timo's suggestion. How about we have a
special namespace (catalog + database) for built-in objects? This catalog
would be invisible to users, as Xuefu was suggesting.
Then users could still override built-in functions if they fully qualify
the object with the
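To make the idea concrete (the `system`.`builtin` path below is a purely hypothetical name for the invisible catalog and database):

```sql
-- The unqualified call resolves to the user's override.
SELECT concat(a, b) FROM t;

-- A fully qualified call would still reach the built-in version
-- living in the hidden catalog.
SELECT `system`.`builtin`.concat(a, b) FROM t;
```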
Hi Dawid,
thanks for the design document. It closes big conceptual gaps that exist
for historical reasons, with proper support for serializability and
catalog integration in mind.
I would not mind a registerTemporarySource/Sink, but the problem that I
see is that many people think that this is the
Hi all,
thanks for your feedback.
@Stephan: Our efforts will definitely be synced with the general Flink
documentation improvements mentioned in FLIP-42. I also had an offline
discussion with Konstantin about this. Concepts such as streaming
concepts, time, order, etc. should definitely
Hi Bowen,
I understand the potential benefit of overriding certain built-in
functions. I'm open to such a feature if many people agree. However, it
would be great to still support overriding catalog functions with
temporary functions in order to prototype a query even though a
Hi Hanan,
the community is currently reworking parts of the architecture of Flink
SQL to make it a good foundation for further tools around it
(see also FLIP-32 and the following SQL-related FLIPs). In Flink 1.10 the
SQL Client will not receive major updates, but it seems likely that
Bowen Li created FLINK-14090:
Summary: FLIP-57 Rework FunctionCatalog
Key: FLINK-14090
URL: https://issues.apache.org/jira/browse/FLINK-14090
Project: Flink
Issue Type: Improvement
>>> Should the table API concepts be a section in the overall concepts then?
I would say yes, though not exactly as a Table API concept but as a streaming SQL
concept, plus how to unify streaming and batch from SQL's perspective.
This topic has many connections with the underlying streaming
By user@ml I meant the Flink user mailing list, which is
u...@flink.apache.org; sorry for the inconvenience.
Best,
Kurt
On Mon, Sep 16, 2019 at 10:08 PM srikanth flink wrote:
Hi Forward,
Looks like it has great content to read, but sorry about that. This is
how I see it:
[image: image.png]
Thanks
Srikanth
On Mon, Sep 16, 2019 at 7:30 PM Forward Xu wrote:
Hi Srikanth,
Here are some past developer profiles you can view.
https://ververica.cn/developers-resources/
Many of them are cases of Flink-SQL.
Best,
Forward
srikanth flink wrote on Mon, Sep 16, 2019 at 9:39 PM:
@Xintong
Thanks for the feedback.
Just to clarify step 6:
If the first point is done before step 5 (e.g., as part of step 4), then it is
just a matter of keeping the info about the default slot in the RM's data
structures associated with the TM, with no real change in behaviour.
When this info is available, I think it can
Hi Kurt,
thanks for quick response. Is the email user@ml?
Regards
Srikanth
On Mon, Sep 16, 2019 at 1:31 PM Kurt Young wrote:
You can try to use a UDTF.
-- Original --
From: srikanth flink
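Applying a UDTF per input row is done with LATERAL TABLE; the function name and its registration below are hypothetical:

```sql
-- The table function can consult periodically refreshed data
-- inside its eval() method.
CREATE TEMPORARY FUNCTION lookup_static AS 'com.example.udf.LookupStatic';

SELECT s.id, s.payload, t.matched_value
FROM dynamic_stream AS s,
LATERAL TABLE(lookup_static(s.id)) AS t(matched_value);
```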
Hi there,
I'm working with streaming in Flink SQL. I've created two tables: one with a
dynamic stream and the other with periodic updates.
I would like to keep the periodic table static (but updated with new data
every day or so by flushing the old), so at any point in time the static
table should
One thing which was briefly mentioned in the FLIP but not in the
implementation plan is the update of the web UI. I think it is worth
putting an extra item for updating the web UI to properly display the
resources a TM has still to offer with dynamic slot allocation. I guess we
need to pull in
Rui Li created FLINK-14089:
--
Summary: SQL CLI doesn't support explain DMLs
Key: FLINK-14089
URL: https://issues.apache.org/jira/browse/FLINK-14089
Project: Flink
Issue Type: Bug
John Lonergan created FLINK-14088:
-
Summary: Couchbase connector - sink with checkpointing support
Key: FLINK-14088
URL: https://issues.apache.org/jira/browse/FLINK-14088
Project: Flink
Hello everyone,
I've drafted a FLIP that describes the current design of the Pulsar connector:
https://docs.google.com/document/d/1rES79eKhkJxrRfQp1b3u8LB2aPaq-6JaDHDPJIA8kMY/edit#
Please take a look and let me know what you think.
Thanks,
Yijie
On Sat, Sep 14, 2019 at 12:08 AM Rong Rong
FLIP-24 mentions a SQL gateway in the roadmap. Is there any progress on this
path? Is something planned for 1.10 / 2.0?
Thanks for the comments, Andrey.
- I agree that instead of ResourceManagerGateway#sendSlotReport, we should
add the default slot resource profile to
ResourceManagerGateway#registerTaskExecutor.
- If I understand correctly, the reason you suggest doing the default slot
resource profile first and then doing
There are also some other efforts to restructure the docs, which have
resulted until now in more quickstarts and more concepts.
IIRC there is the goal to have a big section on concepts for the whole
system: streaming concepts, time, order, etc.
The API docs would be really more about an API
Hi Xintong,
Thanks for sharing the implementation steps. I also think they make sense
with the feature option.
I was wondering if we could order the steps in a way that each change does
not affect other components too much, always having a working system;
then maybe the feature option does not
luojiangyu created FLINK-14087:
--
Summary: throws java.lang.ArrayIndexOutOfBoundsException when
emitting the data using RebalancePartitioner.
Key: FLINK-14087
URL: https://issues.apache.org/jira/browse/FLINK-14087
Hi Dipanjan,
not every configuration option in flink-conf.yaml is relevant for the
SQL Client. If you submit to an already existing cluster, then you only
need to know its address and port, or, if it is using high
availability, where ZooKeeper is running. However, in the general
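The handful of flink-conf.yaml entries that typically matter in that case might look like this (host names are placeholders):

```yaml
# Address and port of the existing cluster's JobManager.
jobmanager.rpc.address: jobmanager-host
jobmanager.rpc.port: 6123

# Only needed when the cluster runs with high availability:
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-host1:2181,zk-host2:2181
```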
Thanks a lot for letting the community know, Anand. Great to see that
Flink's open-source ecosystem is growing. Have you reached out to Robert or
Becket in order to include the flinkk8soperator project on Flink's
ecosystem website [1]?
I've cross posted this thread to the user ML so that Flink
Hi everyone,
In Flink 1.9, we introduced some awesome features such as complete catalog
support [1] and SQL DDL support [2]. These features have been a critical
integration for Flink to be able to manage data and metadata like a classic
RDBMS and make it easier for developers to construct their
wangxiyuan created FLINK-14086:
--
Summary: PrometheusReporterEndToEndITCase doesn't support ARM arch
Key: FLINK-14086
URL: https://issues.apache.org/jira/browse/FLINK-14086
Project: Flink
Issue
After some review and discussion in the Google document, I think it's time
to convert this design to a cwiki FLIP page and start the voting process.
Best,
Kurt
On Mon, Sep 9, 2019 at 7:46 PM Jark Wu wrote:
> Hi all,
>
> Thanks all for so much feedbacks received in the doc so far.
> I saw a
AFAIK, Flink SQL is really strong for production only if
you know what queries you are running. Further, if you open up Flink SQL to
end-users, then no: Flink SQL is still not that mature and not
that rich in terms of functionality compared to Spark SQL.
On Mon, Sep 16,
Hi Srikanth,
AFAIK, there are quite a few companies already using Flink streaming
SQL to back their production systems, such as realtime data warehouses. If
you meet issues when trying streaming SQL, I would suggest you
send the problem to user@ml, where you can receive some help.
Best,
Kurt
+1 to this feature; I left some comments on the Google doc.
Another comment: I think we should reorganize the content
when converting this to a cwiki page. I will have some offline
discussion with you.
Since this feature seems to be a fairly big effort, I suggest we can
Junling created FLINK-14085:
---
Summary: schema changes after using "create view" command via
flink-table-sql-client
Key: FLINK-14085
URL: https://issues.apache.org/jira/browse/FLINK-14085
Project: Flink
Hi there,
I came across Flink and Flink SQL and am using Flink SQL for stream processing.
Flink runs as a 3-node cluster with embedded ZooKeeper, given an 80GB heap on
each. I came across a few issues and would like to get some clarification.
- Job1: Using Flink(java) to read and flatten my JSON and
Hi Stephan,
- I'm using streaming SQL
- version 1.9, as recommended by Flink, to have an updated stable version.
Thanks
Srikanth
On Fri, Sep 13, 2019 at 8:30 PM Stephan Ewen wrote:
> Can you share some more details?
>
> - are you running batch SQL or streaming SQL
> - are you running the
Jingsong Lee created FLINK-14084:
Summary: Support ZonedTimestampType
Key: FLINK-14084
URL: https://issues.apache.org/jira/browse/FLINK-14084
Project: Flink
Issue Type: Sub-task
Jingsong Lee created FLINK-14083:
Summary: Support StructuredType
Key: FLINK-14083
URL: https://issues.apache.org/jira/browse/FLINK-14083
Project: Flink
Issue Type: Sub-task
Jingsong Lee created FLINK-14082:
Summary: Support precision of DayTimeIntervalType and
YearMonthIntervalType
Key: FLINK-14082
URL: https://issues.apache.org/jira/browse/FLINK-14082
Project: Flink
Jingsong Lee created FLINK-14080:
Summary: Support precision of TimestampType
Key: FLINK-14080
URL: https://issues.apache.org/jira/browse/FLINK-14080
Project: Flink
Issue Type: Sub-task
Jingsong Lee created FLINK-14081:
Summary: Support precision of TimeType
Key: FLINK-14081
URL: https://issues.apache.org/jira/browse/FLINK-14081
Project: Flink
Issue Type: Sub-task
Jingsong Lee created FLINK-14079:
Summary: Full data type support in planner
Key: FLINK-14079
URL: https://issues.apache.org/jira/browse/FLINK-14079
Project: Flink
Issue Type: New Feature