Andrey Zagrebin created FLINK-15946:
---
Summary: Task manager Kubernetes pods take long time to terminate
Key: FLINK-15946
URL: https://issues.apache.org/jira/browse/FLINK-15946
Project: Flink
Tzu-Li (Gordon) Tai created FLINK-15945:
---
Summary: Remove MULTIPLEX_FLINK_STATE config from Stateful
Functions
Key: FLINK-15945
URL: https://issues.apache.org/jira/browse/FLINK-15945
Project:
Hi dev,
Currently I want to remove some already deprecated methods from
TableEnvironment which are annotated with @PublicEvolving. I have also created
a discussion thread [1] on both the dev and user mailing lists to gather
feedback on that. But I didn't find any matching rule in the Flink bylaws [2] to
Jark Wu created FLINK-15944:
---
Summary: Resolve the potential class conflict problem when depending on
both planners
Key: FLINK-15944
URL: https://issues.apache.org/jira/browse/FLINK-15944
Project: Flink
Hi Jeff & Till!
Thanks for the feedback, this is exactly the discussion I was looking for.
The JobListener looks very promising if we can expose the JobGraph somehow
(correct me if I am wrong but it is not accessible at the moment).
I did not know about this feature, that's why I added my
gkgkgk created FLINK-15943:
--
Summary: Rowtime field name cannot be the same as the json field
Key: FLINK-15943
URL: https://issues.apache.org/jira/browse/FLINK-15943
Project: Flink
Issue Type: Bug
Andrey Zagrebin created FLINK-15942:
---
Summary: Improve logging of infinite resource profile
Key: FLINK-15942
URL: https://issues.apache.org/jira/browse/FLINK-15942
Project: Flink
Issue
I have another concern which may not be closely related to this thread.
Since Flink doesn't include all the necessary jars, I think it is critical
for Flink to display a meaningful error message when any class is missing,
e.g. here's the error message I get when I use Kafka but miss
including flink-json.
Alright, thanks for confirming this, Benchao!
On Thu, Feb 6, 2020 at 6:36 PM Benchao Li wrote:
> Hi Andrey,
>
> I noticed that 1.10 has changed to enabling background cleanup by default
> just after I posted to this email.
> So it won't affect 1.10 any more, just affect 1.9.x. We can move to the
Dawid Wysakowicz created FLINK-15941:
Summary: ConfluentSchemaRegistryCoder should not perform HTTP
requests for all requests
Key: FLINK-15941
URL: https://issues.apache.org/jira/browse/FLINK-15941
Hi Andrey,
I noticed that 1.10 has changed to enabling background cleanup by default
just after I posted to this email.
So it won't affect 1.10 any more, just 1.9.x. We can move to the
Jira ticket to discuss further.
Andrey Zagrebin wrote on Thu, Feb 6, 2020, at 11:30 PM:
> Hi Benchao,
>
> Do you
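For context, on 1.9.x the background cleanup has to be requested explicitly when building the state TTL configuration. The fragment below is a sketch against Flink's StateTtlConfig builder API (a configuration fragment, not a complete job); the descriptor name is made up for illustration:

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

// On 1.9.x, opt in to incremental background cleanup explicitly;
// since 1.10, background cleanup is enabled by default.
StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.hours(1))          // state time-to-live
    .cleanupIncrementally(10, false)    // check 10 entries per state access
    .build();

// "myState" is a hypothetical descriptor name for illustration.
ValueStateDescriptor<String> descriptor =
    new ValueStateDescriptor<>("myState", String.class);
descriptor.enableTimeToLive(ttlConfig);
```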
I would not object given that it is rather small at the moment. However, I
also think that we should have a plan how to handle the ever growing Flink
ecosystem and how to make it easily accessible to our users. E.g., one
far-fetched idea could be something like a configuration script which
Hi Benchao,
Do you observe this issue FLINK-15938 with 1.9 or 1.10?
If with 1.9, I suggest checking with 1.10.
Thanks,
Andrey
On Thu, Feb 6, 2020 at 4:07 PM Benchao Li wrote:
> Hi all,
>
> I found another issue[1], I don't know if it should be a blocker. But it
> does affect joins without
+1
On 06.02.20 05:54, Bowen Li wrote:
+1, LGTM
On Tue, Feb 4, 2020 at 11:28 PM Jark Wu wrote:
+1 from my side.
Thanks for driving this.
Btw, could you also attach a JIRA issue with the changes described in it,
so that users can find the issue through the mailing list in the future.
Best,
Hi Gyula,
Flink 1.10 introduced JobListener, which is invoked after job submission and
completion. Maybe we can add an API on JobClient to get the info you need for
Atlas integration.
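As a sketch of how such a listener hook could look: the callback names below mirror Flink's org.apache.flink.core.execution.JobListener, but the JobClient stand-in and the lineage-collecting listener are simplified hypothetical types for illustration, not the real Flink API:

```java
import java.util.ArrayList;
import java.util.List;

public class JobListenerSketch {
    /** Simplified stand-in for Flink's JobClient. */
    interface JobClient {
        String getJobID();
    }

    /** Mirrors the shape of Flink's JobListener callbacks. */
    interface JobListener {
        void onJobSubmitted(JobClient client, Throwable failure);
        void onJobExecuted(Object result, Throwable failure);
    }

    /** Collects job IDs seen at submission, e.g. for Atlas lineage reporting. */
    static class LineageListener implements JobListener {
        final List<String> submittedJobs = new ArrayList<>();

        @Override
        public void onJobSubmitted(JobClient client, Throwable failure) {
            if (failure == null) {
                submittedJobs.add(client.getJobID());
            }
        }

        @Override
        public void onJobExecuted(Object result, Throwable failure) {
            // Here one could report the final job status to an external catalog.
        }
    }

    public static void main(String[] args) {
        LineageListener listener = new LineageListener();
        // Simulate the runtime invoking the callback after submission.
        listener.onJobSubmitted(() -> "job-42", null);
        System.out.println(listener.submittedJobs); // prints [job-42]
    }
}
```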
Hi Gyula,
technically speaking the JobGraph is sent to the Dispatcher where a
JobMaster is started to execute the JobGraph. The JobGraph comes either
from the JobSubmitHandler or the JarRunHandler. Except for creating the
ExecutionGraph from the JobGraph there is not much happening on the
Hi everyone,
Thank you all for the great inputs!
I think probably what we all agree on is we should try to make a leaner
flink-dist. However, we may also need to make some compromises for the
user experience: users shouldn't need to download the dependencies from
different places.
YufeiLiu created FLINK-15940:
Summary: Flink TupleSerializer and CaseClassSerializer should
support serializing and deserializing NULL values
Key: FLINK-15940
URL: https://issues.apache.org/jira/browse/FLINK-15940
vinoyang created FLINK-15939:
Summary: Move runtime.clusterframework.overlays package into
flink-mesos module
Key: FLINK-15939
URL: https://issues.apache.org/jira/browse/FLINK-15939
Project: Flink
Hi all,
I found another issue[1], I don't know if it should be a blocker. But it
does affect joins without window in the blink planner.
[1] https://issues.apache.org/jira/browse/FLINK-15938
Jeff Zhang wrote on Thu, Feb 6, 2020, at 5:05 PM:
> Hi Jingsong,
>
> Thanks for the suggestion. It works for running it in
Benchao Li created FLINK-15938:
--
Summary: Idle state not cleaned in StreamingJoinOperator and
StreamingSemiAntiJoinOperator
Key: FLINK-15938
URL: https://issues.apache.org/jira/browse/FLINK-15938
+1. It sounds great to allow us to support zk 3.4 and 3.5.
Thanks for starting the discussion.
Best,
Hequn
On Thu, Feb 6, 2020 at 12:21 AM Till Rohrmann wrote:
> Thanks for starting this discussion Chesnay. +1 for starting a new
> flink-shaded release.
>
> Cheers,
> Till
>
> On Wed, Feb 5,
Hi Stephan,
Good idea. Just like Hadoop, we can have flink-shaded-hive-uber.
Then the startup of the Hive integration will be very simple with one or two
pre-bundled versions; the user just adds these dependencies:
- flink-connector-hive.jar
- flink-shaded-hive-uber-.jar
Some changes are needed, but I think it
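To make the setup concrete, such a dependency declaration might look roughly like the following in a user's pom.xml. The artifact IDs, Scala suffix, and versions are assumptions for illustration; the thread leaves the exact coordinates, including the uber jar's version suffix, open:

```xml
<!-- Hypothetical coordinates; illustrative only. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-hive_2.11</artifactId>
    <version>1.10.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-shaded-hive-uber</artifactId>
    <version>2.3.6</version>
</dependency>
```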
Hi Jingsong!
This sounds like with two pre-bundled versions (Hive 1.2.1 and Hive 2.3.6)
you can cover a lot of versions.
Would it make sense to add these to flink-shaded (with proper dependency
exclusions of unnecessary dependencies) and offer them as a download,
similar as we offer pre-shaded
Hi Chesnay,
Thanks a lot for sharing your thoughts.
>> this is not a source release by definition, since a source release must
not contain binaries. This is a convenience binary, or possibly even a
distributed-channel appropriate version of our existing convenience binary.
A user downloading
Hi Stephan,
The hive/lib/ directory has many jars; it covers execution, the metastore, the
Hive client, and everything else.
What we really depend on is hive-exec.jar (hive-metastore.jar is also
required with older Hive versions).
And hive-exec.jar is an uber jar; we only want half of its classes. These
half classes
Hi Jingsong,
Thanks for the suggestion. It works when running in the IDE, but for a
downstream project like Zeppelin I include the Flink jars on the
classpath, and it only works when I specify the jars one by one explicitly;
using * doesn't work.
E.g., the following command where I use *
Hi Jeff,
For FLINK-15935 [1],
I try to think of it as a non-blocker, but it's really an important issue.
The problem is the class loading order: we want to load the class from the
blink-planner jar, but we actually load the class from the flink-planner jar.
First of all, the order of class loading
There is a blocker issue: https://issues.apache.org/jira/browse/FLINK-15937
Best,
Jincheng
Jeff Zhang wrote on Thu, Feb 6, 2020, at 3:09 PM:
> -1, I just found one critical issue
> https://issues.apache.org/jira/browse/FLINK-15935
> This ticket means a user is unable to use watermarks in SQL if they specify both
>