Thanks Gordon for bringing this up. I'm glad to say that the Blink planner merge work is almost done, and I will follow up on the work of integrating the Blink planner with the Table API so that it can co-exist with the current Flink planner.
In addition to this, the following features are also going well, and I will
spend some time keeping track of them:

1. FLIP-32: Restructure flink-table for future contributions [1]
2. FLIP-37: Rework of the Table API Type System [2]
3. Hive integration work (including the Hive metastore [3] and connectors)

[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions
[2] https://cwiki.apache.org/confluence/display/FLINK/FLIP-37%3A+Rework+of+the+Table+API+Type+System
[3] https://cwiki.apache.org/confluence/display/FLINK/FLIP-30%3A+Unified+Catalog+APIs

Best,
Kurt

On Mon, May 27, 2019 at 7:18 PM jincheng sun <sunjincheng...@gmail.com>
wrote:

> Hi Gordon,
>
> Thanks for mentioning the feature freeze date for 1.9.0; that's very
> helpful for contributors to evaluate their dev plans!
>
> Regarding FLIP-29, we are glad to do our best to finish its development
> in time to catch up with the 1.9 release.
>
> Thanks again for pushing the 1.9.0 release forward!
>
> Cheers,
> Jincheng
>
> Tzu-Li (Gordon) Tai <tzuli...@apache.org> wrote on Mon, May 27, 2019 at
> 5:48 PM:
>
> > Hi all,
> >
> > I want to kindly remind the community that we're now 5 weeks away from
> > the proposed feature freeze date for 1.9.0, which is June 28.
> >
> > This is not yet a final date we have agreed on, so I would like to
> > start collecting feedback on how the mentioned features are going and,
> > in general, whether or not the date sounds reasonable given the
> > current status of the ongoing efforts.
> > Please let me know what you think!
> >
> > Cheers,
> > Gordon
> >
> > On Mon, May 27, 2019 at 5:40 PM Tzu-Li (Gordon) Tai
> > <tzuli...@apache.org> wrote:
> >
> > > @Hequn @Jincheng
> > >
> > > Thanks for bringing FLIP-29 to attention.
> > > As previously mentioned, the original list is not a fixed feature
> > > set, so if FLIP-29 has ongoing efforts and can make it before the
> > > feature freeze, then of course it should be included!
> > > @himansh1306
> > >
> > > Concerning the ORC format for StreamingFileSink, is there already a
> > > JIRA ticket tracking that? If not, I suggest first opening one to
> > > see whether there is similar interest from committers in adding it.
> > >
> > > On Sun, May 5, 2019 at 11:19 PM Hequn Cheng <chenghe...@gmail.com>
> > > wrote:
> > >
> > >> Hi,
> > >>
> > >> Great job, Gordon! Thanks a lot for driving this and wrapping the
> > >> features up into a detailed list. +1 on it!
> > >>
> > >> It would be great if we could also add FLIP-29 to the list.
> > >> @jincheng sun <sunjincheng...@gmail.com> and I are focusing on it
> > >> these days. I think the features in FLIP-29 would bring big
> > >> enhancements to the Table API. :-)
> > >>
> > >> Best, Hequn
> > >>
> > >> On Sun, May 5, 2019 at 10:41 PM Becket Qin <becket....@gmail.com>
> > >> wrote:
> > >>
> > >> > Thanks for driving this release, Gordon. +1 on the feature list.
> > >> >
> > >> > This is a pretty exciting and ambitious release!
> > >> >
> > >> > Cheers,
> > >> >
> > >> > Jiangjie (Becket) Qin
> > >> >
> > >> > On Sun, May 5, 2019 at 4:28 PM jincheng sun
> > >> > <sunjincheng...@gmail.com> wrote:
> > >> >
> > >> > > Thanks a lot for being our release manager, great job!
> > >> > >
> > >> > > +1 for the feature list, and it would be good to also add
> > >> > > FLIP-29
> > >> > > <https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739>
> > >> > > (Support map/flatMap/aggregate/flatAggregate on the Table API)
> > >> > > to the goals of release 1.9.
> > >> > >
> > >> > > What do you think?
> > >> > >
> > >> > > Best,
> > >> > > Jincheng
> > >> > >
> > >> > > Bowen Li <bowenl...@gmail.com> wrote on Sun, May 5, 2019 at
> > >> > > 12:47 AM:
> > >> > >
> > >> > > > +1, exciting and ambitious goals; the rough timeline looks
> > >> > > > reasonable. Let's make it happen!
> > >> > > > On Sat, May 4, 2019 at 2:47 AM Jark Wu <imj...@gmail.com>
> > >> > > > wrote:
> > >> > > >
> > >> > > > > +1 for the 1.9.0 feature list. Excited to see it happening.
> > >> > > > >
> > >> > > > > Regards,
> > >> > > > > Jark
> > >> > > > >
> > >> > > > > On Thu, 2 May 2019 at 17:07, himansh1...@gmail.com
> > >> > > > > <himansh1...@gmail.com> wrote:
> > >> > > > >
> > >> > > > > > +1 for Protobuf, Hive Metastore integration, and the
> > >> > > > > > savepoint-related features.
> > >> > > > > >
> > >> > > > > > I was hoping that support for the ORC file format could
> > >> > > > > > be added to the StreamingFileSink writer; currently,
> > >> > > > > > Parquet is the only supported columnar file format.
> > >> > > > > >
> > >> > > > > > On 2019/05/01 05:15:23, "Tzu-Li (Gordon) Tai"
> > >> > > > > > <tzuli...@apache.org> wrote:
> > >> > > > > > > Hi community,
> > >> > > > > > >
> > >> > > > > > > Apache Flink 1.8.0 was released a few weeks ago, so
> > >> > > > > > > naturally, it’s time to start thinking about what we
> > >> > > > > > > want to aim for in 1.9.0.
> > >> > > > > > >
> > >> > > > > > > Kurt and I have collected some features that would be
> > >> > > > > > > reasonable to consider including in the next release,
> > >> > > > > > > based on talking with various people as well as
> > >> > > > > > > observations from mailing list discussions and
> > >> > > > > > > questions.
> > >> > > > > > >
> > >> > > > > > > Note that having specific features listed here does
> > >> > > > > > > not mean that no other pull requests or topics will be
> > >> > > > > > > reviewed. I am sure that there are other ongoing
> > >> > > > > > > efforts we missed here that will likely make it in as
> > >> > > > > > > an improvement or new feature in the next release.
> > >> > > > > > > This discussion is merely meant to bootstrap the
> > >> > > > > > > discussion for 1.9, as well as to give contributors an
> > >> > > > > > > idea of what the community is looking to focus on in
> > >> > > > > > > the next couple of weeks.
> > >> > > > > > >
> > >> > > > > > > *Proposed features and focus*
> > >> > > > > > >
> > >> > > > > > > In the previous major release, Apache Flink 1.8.0, the
> > >> > > > > > > community prepared for some major Table & SQL
> > >> > > > > > > additions from the Blink branch. With this in mind,
> > >> > > > > > > for the next release, it would be great to wind up
> > >> > > > > > > those efforts by merging in the Blink-based Table /
> > >> > > > > > > SQL planner and runtime for 1.9.
> > >> > > > > > >
> > >> > > > > > > Following Stephan’s previous thread [1] on the mailing
> > >> > > > > > > list about features in Blink, we should also start
> > >> > > > > > > preparing Blink’s several other enhancements for batch
> > >> > > > > > > execution. This includes resource optimization,
> > >> > > > > > > fine-grained failover, a pluggable shuffle service,
> > >> > > > > > > adapting stream operators for batch execution, as well
> > >> > > > > > > as better integration with systems commonly used in
> > >> > > > > > > batch execution, such as Apache Hive.
> > >> > > > > > >
> > >> > > > > > > Moreover, besides efforts related to the Blink merge,
> > >> > > > > > > we would also like to work towards pushing forward
> > >> > > > > > > some of the features most discussed and anticipated by
> > >> > > > > > > the community.
> > >> > > > > > > Most of these have had discussions on the mailing
> > >> > > > > > > lists spanning multiple releases, and are also
> > >> > > > > > > frequently brought up at community events such as
> > >> > > > > > > Flink Forward. This includes features such as source
> > >> > > > > > > event-time alignment and the source interface rework,
> > >> > > > > > > a savepoint connector that allows users to manipulate
> > >> > > > > > > and query state in savepoints, interactive
> > >> > > > > > > programming, as well as terminating a job with a final
> > >> > > > > > > savepoint.
> > >> > > > > > >
> > >> > > > > > > Last but not least, we have several existing
> > >> > > > > > > contributions or discussions for the ecosystem
> > >> > > > > > > surrounding Flink, which we think are also very
> > >> > > > > > > valuable to try to merge in for 1.9. This includes a
> > >> > > > > > > web UI rework (recently already merged), active K8s
> > >> > > > > > > integration, a Google PubSub connector, native support
> > >> > > > > > > for the Protobuf format, Python support in the Table
> > >> > > > > > > API, as well as reworking Flink’s support for machine
> > >> > > > > > > learning.
> > >> > > > > > > To wrap this up as a list of items, some of which
> > >> > > > > > > already have JIRAs or mailing list threads to track
> > >> > > > > > > them:
> > >> > > > > > >
> > >> > > > > > > - Merge Blink runner for Table & SQL [2]
> > >> > > > > > >   - Restructure flink-table to separate API from core
> > >> > > > > > >     runtime
> > >> > > > > > >   - Make table planners pluggable
> > >> > > > > > >   - Rework Table / SQL type system to integrate better
> > >> > > > > > >     with the SQL standard [3]
> > >> > > > > > >   - Merge Blink planner and runtime for Table / SQL
> > >> > > > > > > - Further preparations for more batch execution
> > >> > > > > > >   optimization from Blink
> > >> > > > > > >   - Dedicated scheduler component [4]
> > >> > > > > > >   - Fine-grained failover for batch [5]
> > >> > > > > > >   - Selectable input stream operator [6]
> > >> > > > > > >   - Pluggable shuffle service [7]
> > >> > > > > > >   - FLIP-30: Unified Catalog API & Hive metastore
> > >> > > > > > >     integration [8]
> > >> > > > > > > - Heavily anticipated / discussed features in the
> > >> > > > > > >   community
> > >> > > > > > >   - FLIP-27: Source interface rework [9]
> > >> > > > > > >   - Savepoint connector [10]
> > >> > > > > > >   - FLIP-34: Terminate / Suspend job with savepoint
> > >> > > > > > >     [11]
> > >> > > > > > >   - FLIP-36: Interactive Programming [12]
> > >> > > > > > > - Ecosystem
> > >> > > > > > >   - Web UI rework [13]
> > >> > > > > > >   - Active K8s integration [14]
> > >> > > > > > >   - Google PubSub connector [15]
> > >> > > > > > >   - First-class Protobuf support [16]
> > >> > > > > > >   - FLIP-38: Python support in Table API [17]
> > >> > > > > > >   - FLIP-39: Flink ML pipeline and libraries on top of
> > >> > > > > > >     Table API [18]
> > >> > > > > > >
> > >> > > > > > > *Suggested release timeline*
> > >> > > > > > >
> > >> > > > > > > Apache Flink 1.8.0 was released earlier this month, so
> > >> > > > > > > based on our usual release schedule, we should aim to
> > >> > > > > > > release 1.9.0 around mid to end of July.
> > >> > > > > > >
> > >> > > > > > > Since this is going to be a fairly large release, to
> > >> > > > > > > give the community enough testing time, I propose that
> > >> > > > > > > the feature freeze be near the end of June (8-9 weeks
> > >> > > > > > > from now, probably June 28). This is of course a
> > >> > > > > > > ballpark estimate for now; we should follow up with a
> > >> > > > > > > separate thread later in the release cycle to give
> > >> > > > > > > contributors an official feature freeze date.
> > >> > > > > > >
> > >> > > > > > > I’d also like to use this opportunity to propose
> > >> > > > > > > myself and Kurt as the release managers for 1.9.
> > >> > > > > > > AFAIK, we have not had 2 RMs for a single release in
> > >> > > > > > > the past, but 1.9.0 is definitely quite ambitious, so
> > >> > > > > > > it would not hurt to have one more on board :)
> > >> > > > > > >
> > >> > > > > > > Cheers,
> > >> > > > > > > Gordon
> > >> > > > > > >
> > >> > > > > > > [1]
> > >> > > > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-1-6-features-td22632.html
> > >> > > > > > > [2] https://issues.apache.org/jira/browse/FLINK-11439
> > >> > > > > > > [3] https://issues.apache.org/jira/browse/FLINK-12251
> > >> > > > > > > [4] https://issues.apache.org/jira/browse/FLINK-10429
> > >> > > > > > > [5]
> > >> > > > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Backtracking-for-failover-regions-td28293.html
> > >> > > > > > > [6]
> > >> > > > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Enhance-Operator-API-to-Support-Dynamically-Selective-Reading-and-EndOfInput-Event-td26753.html
> > >> > > > > > > [7] https://issues.apache.org/jira/browse/FLINK-10653
> > >> > > > > > > [8] https://issues.apache.org/jira/browse/FLINK-11275
> > >> > > > > > > [9]
> > >> > > > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-27-Refactor-Source-Interface-td24952i20.html
> > >> > > > > > > [10] https://issues.apache.org/jira/browse/FLINK-12047
> > >> > > > > > > [11]
> > >> > > > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-33-Terminate-Suspend-Job-with-Savepoint-td26927.html
> > >> > > > > > > [12]
> > >> > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-36%3A+Support+Interactive+Programming+in+Flink
> > >> > > > > > > [13] https://issues.apache.org/jira/browse/FLINK-10705
> > >> > > > > > > [14] https://issues.apache.org/jira/browse/FLINK-9953
> > >> > > > > > > [15] https://issues.apache.org/jira/browse/FLINK-9311
> > >> > > > > > > [16] https://issues.apache.org/jira/browse/FLINK-11333
> > >> > > > > > > [17]
> > >> > > > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-38-Support-python-language-in-flink-TableAPI-td28061.html
> > >> > > > > > > [18]
> > >> > > > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-39-Flink-ML-pipeline-and-ML-libs-td28633.html