Hi everyone, Hi Fabian,
I am also in favor of option 1.
Besides the playgrounds it is a good opportunity to explore this process
for official Docker images as Till suggested. This needs a separate
discussion, though.
Best,
Konstantin
On Fri, Aug 9, 2019 at 5:25 AM Yang Wang wrote:
> Hey Fabi
Hi Team,
We are trying to tune our application / Flink settings. Could you please advise
whether there is any best practice or calculator to find the optimal settings for
the Flink system conf file, e.g., one that, given the CPU and memory, produces
the required settings. We are using Flink 1.4.
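There is no official calculator for this, but a back-of-the-envelope heuristic could be sketched as follows. This is only an illustration: the 25% off-heap/OS reservation and the one-slot-per-core rule are assumptions, not official Flink guidance; the keys follow the Flink 1.4 configuration names.

```python
def suggest_flink_settings(total_memory_mb, cpu_cores):
    """Rough heuristic for Flink 1.4 TaskManager settings (an assumption,
    not an official formula)."""
    # Reserve ~25% of the memory for off-heap buffers and OS overhead.
    heap_mb = int(total_memory_mb * 0.75)
    # One task slot per CPU core is a common starting point.
    slots = max(1, cpu_cores)
    return {
        "taskmanager.heap.mb": heap_mb,
        "taskmanager.numberOfTaskSlots": slots,
        "parallelism.default": slots,
    }

# Example: a 16 GB / 8-core TaskManager host.
print(suggest_flink_settings(16384, 8))
```

Treat the output only as a starting point; network buffers, state backend, and workload characteristics usually require further tuning.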
Hi Xuefu,
As others have mentioned, time-based releases were decided on for good
reasons (more predictability for users, so that they can take that into account
when planning their next upgrades). That doesn't mean we cannot allow for some
flexibility: we do, we have already postponed this fe
Hi,
Re Jark’s:
> Ad. 2 I can't see how unstable connector tests can be fixed more quickly
> after being moved to separate repositories.
It's more about the probability of intermittent failures across all of the modules
adding up, causing the whole build to fail almost all the time. With separate
reposit
Jark Wu created FLINK-13661:
---
Summary: Add a stream specific CREATE TABLE SQL DDL
Key: FLINK-13661
URL: https://issues.apache.org/jira/browse/FLINK-13661
Project: Flink
Issue Type: Sub-task
MalcolmSanders created FLINK-13660:
--
Summary: Cannot submit job on Flink session cluster on kubernetes
with multiple JM pods (zk HA) through web frontend
Key: FLINK-13660
URL: https://issues.apache.org/jira/brows
Jeff Zhang created FLINK-13659:
--
Summary: Add method listDatabases(catalog) and listTables(catalog,
database) in TableEnvironment
Key: FLINK-13659
URL: https://issues.apache.org/jira/browse/FLINK-13659
P
Hey Fabian,
Sounds great! It will be much easier to build an end-to-end playground
with Docker.
I prefer option 1.
We need to build the official Docker images and push them to Docker Hub after
every version is released. They could be used to play with Docker Compose,
Kubernetes, etc.
Today we coul
Hi devs,
Flink uses inverted class loading by default to allow different versions of
dependencies in user code, but currently this approach is not applied to the
client, so I'm wondering whether this is for some special reason?
If not, I think it would be great to add inverted class loading as the
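For context, on the cluster side this inverted (child-first) resolution is controlled through flink-conf.yaml; the sketch below shows the relevant option as I understand it — the exact option name and default should be verified against the docs for your Flink version.

```yaml
# flink-conf.yaml (cluster side) — a sketch, not verified against every version.
# "child-first" resolves classes from the user jar before the Flink classpath,
# which is the inverted class loading discussed above.
classloader.resolve-order: child-first
```

The question in the mail is whether the same mechanism should also apply on the client when it runs user code (e.g. to build the job graph).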
Congratulations Hequn!
Best,
Yun
--
From: Congxian Qiu
Send Time: 2019 Aug. 8 (Thu.) 21:34
To: Yu Li
Cc: Haibo Sun; dev; Rong Rong; user
Subject: Re: Re: [ANNOUNCE] Hequn becomes a Flink committer
Congratulations Hequn!
Best,
Co
Hi,
To subscribe to the dev list, you should mail dev-subscr...@flink.apache.org
instead of dev@flink.apache.org. An automatic reply will be sent, and you
just need to reply to it to subscribe to the dev list.
Best,
tison.
疯琴 <35023...@qq.com> wrote on Fri, Aug 9, 2019 at 7:51 AM:
> I didn't receive any message from you for more
I didn't receive any message from you for more than a day.
Sergei Winitzki created FLINK-13658:
---
Summary: Combine two triggers into one (for streaming windows)
Key: FLINK-13658
URL: https://issues.apache.org/jira/browse/FLINK-13658
Project: Flink
I
Hi Timo,
Thanks for sharing your opinion. By wastefulness, I meant we had planned
and done much work that ended up not being useful in the released product.
Instead of making many partial features, we'd rather make fewer but
complete features. We expected a good integration with Hive in 1.9, but
I remember that Patrick (who maintained the docker-flink images so far)
frequently raised the point that it's good practice to have the images
decoupled from the project release cycle.
Changes to the images can be made frequently and released quickly that way.
In addition, one typically supports image
Again, the feature freeze is not about "what was planned", it is about what
is ready. Otherwise, it is completely unplannable when a release will come.
Everyone has a pet feature they want to see in. If everyone just makes
decisions by themselves and pushes, we can never get anywhere.
Disagreement
Hi,
First of all, I agree with Dawid and David's point.
I will share some experience on the repository split. We have been through
it for Alibaba Blink, which is the most worthwhile project to learn from, I
think.
We split the Blink project into "blink-connectors" and "blink", but we didn't
get much b
Hi, all
Sorry to resend the email with the correct title.
Found a fatal bug, starting from Flink 1.6, which causes the Flink Table API
to not correctly extract the table schema.
Jira
https://issues.apache.org/jira/projects/FLINK/issues/FLINK-13603?filter=allopenissues
there is a change in Flink-core -> Ro
I would be in favour of option 1.
We could also think about making the flink-playgrounds and the Flink docker
image release part of the Flink release process [1] if we don't want to
have independent release cycles. I think at the moment the official Flink
docker image is too often forgotten.
[1]
Hi, all
Found a fatal bug, starting from Flink 1.6, which causes the Flink Table API
to not correctly extract the table schema.
Jira
https://issues.apache.org/jira/projects/FLINK/issues/FLINK-13603?filter=allopenissues
there is a change in Flink-core -> RowTypeInfo -> hashcode; this change
makes Flin
I pretty much agree with your points, Dawid. Some problems which we want
to solve with a repository split are clearly caused by the existing build
system (no incremental builds, not enough flexibility to build only a
subset of modules). Given that a repository split would be a major
endeavour wit
Hey Fabian,
I support option 1.
As per FLIP-42, playgrounds are going to become core to Flink's getting-started
experience, and I believe it is worth the effort to get this right.
- As you mentioned, we may (and in my opinion definitely will) add more images
in the future. Setting up an integr
Hey,
I retract my +1 (at least temporarily, until we discuss alternative
solutions).
>> I would like to also raise an additional issue: currently quite some bugs
>> (like release blockers [1]) are being discovered by ITCases of the
>> connectors. It means that at least initially, the ma
One more thing to add.
If we move the code to flink-playgrounds and build custom images, the
playgrounds effort won't be tied to the Flink 1.9 release any more.
So, we'd be a bit more flexible time-wise but would also need to manually
update the playgrounds for every release.
On Thu, Aug 8, 2019
OK, let's stop the discussion about the playground in the release 1.9
thread.
I've started a new thread on dev@f.a.o to continue the discussion [1].
Best, Fabian
[1]
https://lists.apache.org/thread.html/4f54c0b4162e3db8626afdca5c354050282282d3cc229d01f2d8ca3e@%3Cdev.flink.apache.org%3E
On Thu, 8
Hi everyone,
As you might know, some of us are currently working on Docker-based
playgrounds that make it very easy for first-time Flink users to try out
and play with Flink [0].
Our current setup (still work in progress with some parts merged to the
master branch) looks as follows:
* The playgro
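The description of the setup is truncated here, but a minimal Docker Compose file for such a playground typically looks like the sketch below. The image tag is hypothetical (1.9 was unreleased at the time of this thread), and the service commands and environment variable follow the docker-flink image conventions of that era; verify against the current images before use.

```yaml
# docker-compose.yml — a sketch of a minimal Flink playground (assumed image tag).
version: "2.1"
services:
  jobmanager:
    image: flink:1.9.0-scala_2.11   # hypothetical; build locally before the 1.9 release
    command: jobmanager
    ports:
      - "8081:8081"                 # Flink web UI
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  taskmanager:
    image: flink:1.9.0-scala_2.11
    command: taskmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
```

`docker-compose up -d` then brings up a one-JobManager/one-TaskManager cluster to play with.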
Hi Xuefu,
I disagree with "all that work would be wasted/useless"; it would just
take effect 3 months later.
Regarding "I don't see eye to eye on how and when we had decided a
feature freeze": there was an official [ANNOUNCE] email that targeted
June 28 [1]. I think nobody is super strict a
+1 for the motivation, -1 for the solution, as all of the problems mentioned
above can be addressed with the mono-repo as well.
Multiple repositories:
1) This creates a big pain in the case of a change that targets the code base in
multiple repositories. The change needs to be split into multiple PRs, that need
to b
Hi Xintong,
Thanks for your detailed proposal. After all the memory configurations are
introduced, it will be more powerful to control Flink's memory usage. I
just have a few questions about it.
- Native and Direct Memory
We do not differentiate user direct memory and native memory. They ar
First of all, I don't have much (if any) experience with working
with a multi-repository project of Flink's size. I would like to mention
a few thoughts of mine, though. In general I am slightly against
splitting the repository. I fear that what we actually want to do is to
introduce double st
Hi all,
I understand the merged PR is a feature, but it's something we had planned
and requested for a long time. In fact, on the Hive connector side, we have
done a lot of work (supporting Hive UDFs). Without this PR, all that work
would be wasted and the Hive feature itself in 1.9 would also be close t
Congratulations Hequn!
Best,
Congxian
Yu Li wrote on Thu, Aug 8, 2019 at 2:02 PM:
> Congratulations Hequn! Well deserved!
>
> Best Regards,
> Yu
>
>
> On Thu, 8 Aug 2019 at 03:53, Haibo Sun wrote:
>
>> Congratulations!
>>
>> Best,
>> Haibo
>>
>> At 2019-08-08 02:08:21, "Yun Tang" wrote:
>> >Congratulations
Jark Wu created FLINK-13657:
---
Summary: Remove FlinkJoinToMultiJoinRule pull-in from Calcite
Key: FLINK-13657
URL: https://issues.apache.org/jira/browse/FLINK-13657
Project: Flink
Issue Type: Sub-ta
Jark Wu created FLINK-13656:
---
Summary: Upgrade Calcite dependency to 1.21
Key: FLINK-13656
URL: https://issues.apache.org/jira/browse/FLINK-13656
Project: Flink
Issue Type: Improvement
Co
Hi Till,
we will try to find another way to make the playground available for users
soon. The discussion of whether and how to split up the Flink repository started
only after we discussed the playground and flink-playgrounds repositories.
I think this is the reason we went this way, not necessarily conv
Thanks for the update and driving the discussion Becket!
+1 for starting a vote.
On Wed, Aug 7, 2019 at 11:44 AM, Becket Qin wrote:
> Thanks Stephan.
>
> I think we have resolved all the comments on the wiki page. There are two
> minor changes made to the bylaws since last week.
> 1. For 2/3
LiJun created FLINK-13655:
-
Summary: Caused by: java.io.IOException: Thread 'SortMerger
spilling thread' terminated due to an exception
Key: FLINK-13655
URL: https://issues.apache.org/jira/browse/FLINK-13655
Xiangfu Lee created FLINK-13654:
---
Summary: Wrong word used in comments in the class
Key: FLINK-13654
URL: https://issues.apache.org/jira/browse/FLINK-13654
Project: Flink
Issue Type: Bug
Just as a short addendum, there are also benefits to the ClickEventCount job
not being part of the Flink repository. Assume there is a bug in the job;
then you would have to wait for the next Flink release to
fix it.
On Thu, Aug 8, 2019 at 2:24 PM Till Rohrmann wrote:
> I see that keeping
I see that keeping the playground job in the Flink repository has a couple
of advantages, among other things that it's easier to keep up to date.
However, particularly in light of the potential repository split, where
we want to separate connectors from Flink core, it seems very problematic
to
Hi Till,
as Fabian said, we considered the option you mentioned, but in the end
decided that not maintaining a separate image has more advantages.
In the context of FLIP-42 we are also revisiting the examples in general
and want to clean them up a bit. So, for what it's worth, there will be an
Rui Li created FLINK-13653:
--
Summary: ResultStore should avoid using RowTypeInfo when creating
a result
Key: FLINK-13653
URL: https://issues.apache.org/jira/browse/FLINK-13653
Project: Flink
Issue
> I would like to also raise an additional issue: currently quite some
bugs (like release blockers [1]) are being discovered by ITCases of the
connectors. It means that at least initially, the main repository will
lose some test coverage.
True, but I think this is more a symptom of us not pro
Hi,
Thanks for proposing and writing this down, Chesnay.
Generally speaking, +1 from my side for the idea. It will create additional pain
for cross-repository development, like a new feature in connectors that needs
some change in the main repository. I've worked in such a setup before and the
t
Chesnay Schepler created FLINK-13652:
Summary: Setup instructions for creating an ARM environment
Key: FLINK-13652
URL: https://issues.apache.org/jira/browse/FLINK-13652
Project: Flink
Is
Hi Kurt,
I posted my opinion on this particular example in FLINK-13225.
Regarding the definition of "feature freeze": I think it is good to
write down more of the implicit processes that we had in the past. The
bylaws, coding guidelines, and a better FLIP process are very good steps
towar
Zhenghua Gao created FLINK-13651:
Summary: table api not support cast to decimal with precision and
scale
Key: FLINK-13651
URL: https://issues.apache.org/jira/browse/FLINK-13651
Project: Flink
The motivation for including the job as an example is to avoid having to
maintain a separate Docker image.
We would like to use the regular Flink 1.9 image for the playground and
avoid maintaining an image that is slightly different from the regular 1.9
image.
Maintaining the job in a different reposi
Chesnay Schepler created FLINK-13650:
Summary: Move classloading utils from CommonTestUtils with
ClassLoaderUtils
Key: FLINK-13650
URL: https://issues.apache.org/jira/browse/FLINK-13650
Project: F
Timo Walther created FLINK-13649:
Summary: Improve error message when job submission was not
successful
Key: FLINK-13649
URL: https://issues.apache.org/jira/browse/FLINK-13649
Project: Flink
Hi Stephan,
Thanks for bringing this up. I think it's important and a good time to
discuss what *feature freeze* really means. At least to me, it seems I have
some misunderstandings about this compared to other community members. But as
you pointed out in the Jira and also in this mail, I think
Jark Wu created FLINK-13648:
---
Summary: Support "IS NOT DISTINCT FROM" operator in lookup join
Key: FLINK-13648
URL: https://issues.apache.org/jira/browse/FLINK-13648
Project: Flink
Issue Type: New
Hi all!
I would like to bring this topic up, because we saw quite a few "secret"
post-feature-freeze feature merges.
The latest example was https://issues.apache.org/jira/browse/FLINK-13225
I would like to make sure that we are all on the same page on what a
feature freeze means and how to handle
Till Rohrmann created FLINK-13647:
-
Summary: Allow default methods and static methods to be added to
public interfaces
Key: FLINK-13647
URL: https://issues.apache.org/jira/browse/FLINK-13647
Project:
Thanks for the detailed instructions!
Best,
Kurt
On Thu, Aug 8, 2019 at 3:40 PM Fabian Hueske wrote:
> [Forking off this thread to keep the announce thread "clean"]
>
> Hi Kurt,
>
> The playground needs a bit of manual work at the moment, because 1.9 is
> not released yet.
> The docker-compose
Before backporting the playground PR to release-1.9, I'd like to
understand why the ClickEventCount job needs to be part of the Flink
distribution. Looking at the example, it seems to only work in combination
with a Kafka cluster. Since it is not self-contained, it does not add much
value for a
[Forking off this thread to keep the announce thread "clean"]
Hi Kurt,
The playground needs a bit of manual work at the moment, because 1.9 is not
released yet.
The docker-compose and Flink configurations are still in a PR [1].
Also, the Flink 1.9 Docker containers need to be built manually. When 1.9
+1 to include this in 1.9.0; adding some examples doesn't look like a new
feature to me.
BTW, I am also trying this tutorial based on the release-1.9 branch, but am
blocked by:
git clone --branch release-1.10-SNAPSHOT
g...@github.com:apache/flink-playgrounds.git
Neither 1.10 nor 1.9 exists in flink-playgr
Hi,
I worked with Konstantin and reviewed the PR.
I think the playground is a great way to get started with Flink and explore
its recovery mechanisms and unique features like savepoints.
I'm in favor of adding the required streaming example program for the 1.9
release unless there's a good technic
wangxiyuan created FLINK-13646:
--
Summary: Add ARM CI job definition scripts
Key: FLINK-13646
URL: https://issues.apache.org/jira/browse/FLINK-13646
Project: Flink
Issue Type: Sub-task