Thanks Gordon for bringing this up.
I'm glad to say that the Blink planner merge work is almost done, and I will
follow up on integrating the Blink planner with the Table API so that it
co-exists with the current Flink planner.
In addition to this, the following features:
1. FLIP-32: Restructure flink-table for
Sean Bollin created FLINK-12638:
---
Summary: Expose max parallelism via the REST API and via the Web UI
Key: FLINK-12638
URL: https://issues.apache.org/jira/browse/FLINK-12638
Project: Flink
aitozi created FLINK-12637:
--
Summary: Add floatingBufferUsage and exclusiveBufferUsage for
credit based mode
Key: FLINK-12637
URL: https://issues.apache.org/jira/browse/FLINK-12637
Project: Flink
Hi there,
I faced an issue when adding a file to the distributed cache in Flink.
My setup:
- Java 1.8
- Flink 1.8
- OS: Windows, Linux
Test scenario:
1. Create a simple stream environment
2. Add a local file to the distributed cache
3. Add a simple source
+1 (non-binding)
- Release notes are correct.
- Built from source archive successfully.
- Signatures and hash are correct.
- All artifacts (11 artifacts, including flink-shaded) have been deployed to
the Maven Central repository.
- javax annotations are not included in the slim jar any more.
Hi Shaoxuan,
Thanks a lot for driving this. +1 to remove the module.
The git log of this module shows that it has been inactive for a long time.
I think it's ok to remove it for now. It would also be good to switch to
the new interface earlier.
Best, Hequn
On Mon, May 27, 2019 at 8:58 PM
I quickly scanned the changes and could not spot any issues.
+1
Am 27.05.19 um 13:36 schrieb Chesnay Schepler:
+1
* git tag exists
* no binaries in release
* relocated jackson no longer bundled twice in hadoop jars
* jackson dependency tree exists
* netty-tcnative-static not part of release
*
I want to contribute to Apache Flink. Would you please give me the
contributor permission?
My JIRA ID is yuwenbing
Thanks.
+1 for removal. Personally I'd prefer marking it as deprecated and remove
the module in the next release, just to follow the established procedure.
And +1 on removing the `flink-libraries/flink-ml-uber` as well.
Thanks,
Jiangjie (Becket) Qin
On Mon, May 27, 2019 at 5:07 PM jincheng sun
wrote:
Chesnay Schepler created FLINK-12636:
Summary: REST API stability test does not fail on compatible
modifications
Key: FLINK-12636
URL: https://issues.apache.org/jira/browse/FLINK-12636
Project:
Chesnay Schepler created FLINK-12635:
Summary: REST API stability test does not cover jar upload
Key: FLINK-12635
URL: https://issues.apache.org/jira/browse/FLINK-12635
Project: Flink
+1
* git tag exists
* no binaries in release
* relocated jackson no longer bundled twice in hadoop jars
* jackson dependency tree exists
* netty-tcnative-static not part of release
* artifacts present for each hadoop version
* compared contents of each hadoop jar with the current ones, no
Chesnay Schepler created FLINK-12634:
Summary: Store shading prefix in property
Key: FLINK-12634
URL: https://issues.apache.org/jira/browse/FLINK-12634
Project: Flink
Issue Type:
Hi Gordon,
Thanks for mentioning the feature freeze date for 1.9.0; that's very helpful
for contributors to evaluate their dev plans!
Regarding FLIP-29, we will do our best to finish its development in time to
catch the 1.9 release.
Thanks again for pushing the 1.9.0 release.
Honestly, I'm not sure whether FLINK-12598 should be treated as a
blocker nonetheless: Although it does not affect the usage of
flink-shaded, it may affect Flink + Flink job developers during bug
hunting sessions where they need to debug into the source code. In this
scenario, you'll definitely
ok, agreed - I created a PR just in case another RC is needed anyway ;)
This is not only about the shaded hadoop sources though...for me
personally, it affects Netty - should affect anything we relocate
On 27/05/2019 11:08, Chesnay Schepler wrote:
> That would've been a good workaround, but RC3
Hi all,
I want to kindly remind the community that we're now 5 weeks away from the
proposed feature freeze date for 1.9.0, which is June 28.
This is not yet a final date we have agreed on, so I would like to start
collecting feedback on how the mentioned features are going, and in
general,
@Hequn @Jincheng
Thanks for bringing up FLIP-29 to attention.
As previously mentioned, the original list is not a fixed feature set, so
if FLIP-29 has ongoing efforts and can make it before the feature freeze,
then of course it should be included!
@himansh1306
Concerning the ORC format for
Hi All,
I obviously support this proposal, but I'd like to emphasize two points.
* I think we can significantly improve the getting-started experience with
better (and up-to-date) tutorials.
* A better structure and separation of concepts and API will be very
helpful. I noticed this when I was
It always worked for Netty; why would it no longer do so?
On 27/05/2019 11:22, Nico Kruber wrote:
> ok, agreed - I created a PR just in case another RC is needed anyway ;)
> This is not only about the shaded hadoop sources though...for me
> personally, it affects Netty - should affect anything we
xiezhiqiang created FLINK-12633:
---
Summary: flink sql-client throw No context matches
Key: FLINK-12633
URL: https://issues.apache.org/jira/browse/FLINK-12633
Project: Flink
Issue Type: Bug
That would've been a good workaround, but RC3 is already out.
I wouldn't wanna delay the release any further because of something one
can work around,
especially so since this only affects people who want the shaded hadoop
sources.
For this you'd probably get by with using the original ones
+1 for removing it!
And we also plan to delete the `flink-libraries/flink-ml-uber`, right?
Best,
Jincheng
Rong Rong wrote on Fri, May 24, 2019 at 1:18 AM:
> +1 for the deletion.
>
> Also I think it also might be a good idea to update the roadmap for the
> plan of removal/development since we've reached the
Hi Ramya,
which configuration options do you wanna change at runtime? Flink's cluster
configuration is read at startup of the cluster and cannot be changed
during its lifetime. If you want to change the behaviour of an operator,
then this is possible. One way could be to use a CoMapFunction
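The CoMapFunction idea can be sketched as follows: connect the data stream
with a control stream, keep the latest control value as state, and apply it
to each data element. The class below mimics only the two-method shape of
Flink's CoMapFunction (map1 for data, map2 for control) in plain Java so it
runs without Flink on the classpath; the name ThresholdFilter and the
threshold semantics are illustrative assumptions, not something from this
thread.

```java
// Illustrative only: mimics the two-input shape of Flink's
// CoMapFunction<IN1, IN2, OUT> without requiring Flink dependencies.
// In a real job you would connect the two streams with
// dataStream.connect(controlStream) and implement CoMapFunction.
public class ThresholdFilter {
    private int threshold = 0;           // operator "state", updated at runtime

    // Plays the role of map2: consume an element from the control stream.
    public void onControl(int newThreshold) {
        this.threshold = newThreshold;
    }

    // Plays the role of map1: process an element from the data stream.
    public boolean accepts(int value) {
        return value >= threshold;
    }

    public static void main(String[] args) {
        ThresholdFilter f = new ThresholdFilter();
        System.out.println(f.accepts(5));   // true, threshold still 0
        f.onControl(10);                    // control stream raises the threshold
        System.out.println(f.accepts(5));   // false after reconfiguration
    }
}
```

In an actual Flink pipeline the control value would live in keyed or
operator state so it survives failures, rather than in a plain field.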
Thanks Jark for bringing up this topic. I think proper concepts are very
important for users of the Table API & SQL, especially for them to have a
clear understanding of the behavior of their SQL jobs. This is also
essential for connector developers to better
understand why we
Harshith Bolar created FLINK-12632:
--
Summary: Flink WebUI doesn't show the full error response when a
job deployment fails
Key: FLINK-12632
URL: https://issues.apache.org/jira/browse/FLINK-12632
Ji Liu created FLINK-12631:
--
Summary: Check if proper JAR file in JobWithJars
Key: FLINK-12631
URL: https://issues.apache.org/jira/browse/FLINK-12631
Project: Flink
Issue Type: Improvement
Hi all,
We have prepared a design doc [1] about source and sink concepts in Flink
SQL. This is actually an extended discussion about SQL DDL [2].
In the design doc, we want to figure out some conceptual problems. For
example:
1. How to define boundedness in DDL
2. How to define a changelog in
zhijiang created FLINK-12630:
Summary: Refactor abstract InputGate to general interface
Key: FLINK-12630
URL: https://issues.apache.org/jira/browse/FLINK-12630
Project: Flink
Issue Type:
Hi,
I would like to know the best ways to read application configuration from
a Flink job. Are MySQL connectors supported, so that some
application-related configuration can be read from SQL?
How can re-reading/re-loading of configuration be handled here?
Can somebody help with some best