Just looked at Greg's object juggle PR - looks good for inclusion in the
next release candidate.

Have not tested the web UI Router fix, but the code looks good.
Hi,

I have two bugfix pull requests in the stack.

[FLINK-3340] [runtime] Fix object juggling in drivers
  https://github.com/apache/flink/pull/1626

[FLINK-3437] [web-dashboard] Fix UI router state for job plan
  https://github.com/apache/flink/pull/1661

Greg

On Thu, Feb 25, 2016 at 8:32 AM, Robert Metzger <rmetz...@apache.org> wrote:

> Damn. I agree that this is a blocker.
> I use the maven-enforcer-plugin to check for the right Maven version, but
> the build profile that runs that check is only active during "deploy",
> not when packaging the binaries.
> That's why I didn't realize that I had built the binaries with the wrong
> Maven version.
>
> I suggest that we keep collecting problems until Friday afternoon (CET).
> Then I'll create the next release candidate.
>
> I'd also like to address this one:
> https://issues.apache.org/jira/browse/FLINK-3509
>
>
> On Thu, Feb 25, 2016 at 2:23 PM, Fabian Hueske <fhue...@gmail.com> wrote:
>
> > Hi folks,
> >
> > I think I found a release blocker.
> > The flink-dist JAR file contains non-relocated classes of Google Guava
> > and Apache HttpComponents.
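
(A quick way to re-check this on the next candidate, sketched below: put only
the flink-dist JAR on the classpath and probe for the upstream class names.
The probe class is illustrative and not part of Flink; the two class names are
just well-known entry points of Guava and HttpComponents.)

    // Illustrative probe, not Flink code: with only the flink-dist JAR on the
    // classpath, a correctly relocated build should make both lookups fail.
    public class ShadingProbe {
        public static void main(String[] args) {
            String[] upstreamClasses = {
                "com.google.common.collect.ImmutableList", // Google Guava
                "org.apache.http.client.HttpClient"        // Apache HttpComponents
            };
            for (String name : upstreamClasses) {
                try {
                    Class.forName(name);
                    System.out.println("visible (not relocated): " + name);
                } catch (ClassNotFoundException e) {
                    System.out.println("not visible (ok): " + name);
                }
            }
        }
    }

Compile the class and run it with only the flink-dist JAR plus the class
itself on the classpath (java -cp <path-to-flink-dist-jar>:. ShadingProbe,
adjusting the JAR path); any "visible" line reproduces the problem.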
> >
> > Fabian
> >
> > 2016-02-25 13:21 GMT+01:00 Chesnay Schepler <ches...@apache.org>:
> >
> > > tested the RC on Windows:
> > >
> > > - source compiles
> > > - some tests categorically fail: see FLINK-3491 / FLINK-3496
> > > - start/stop scripts work in both Cygwin and Windows CMD
> > > - ran several examples from batch/streaming/python
> > > - scripts also work on paths containing spaces
> > >
> > >
> > > On 25.02.2016 12:41, Robert Metzger wrote:
> > >
> > >> (I'm removing user@ from the discussion)
> > >>
> > >> Thank you for bringing the pull request to my attention, Marton. I
> > >> have to admit that I didn't announce this RC properly in advance. In
> > >> the RC0 thread I said "early next week" and now it's Thursday. I
> > >> should have said something in that thread.
> > >> The "trigger" for creating the release was that the number of blocking
> > >> issues is now 0.
> > >>
> > >> I did a quick check of the open pull requests yesterday evening and
> > >> found one [1] to be included in the RC as well. Since the PR you
> > >> mentioned is marked as [WIP], I thought it was not yet ready to be
> > >> merged.
> > >>
> > >> I would like to find a solution that works for everyone here: I would
> > >> like to avoid delaying the release until tomorrow evening, and also
> > >> the work it incurs for me to create a release candidate.
> > >> How about the following: we keep this vote open, test and check the
> > >> release, and you merge the change to master in the meantime.
> > >> Most likely, the release gets cancelled anyway because we find
> > >> something, and then the next RC will contain your change.
> > >>
> > >> [1] https://github.com/apache/flink/pull/1706
> > >>
> > >> On Thu, Feb 25, 2016 at 12:11 PM, Márton Balassi <balassi.mar...@gmail.com> wrote:
> > >>
> > >>> Thanks for creating the candidate, Robert, and for the heads-up, Slim.
> > >>>
> > >>> I would like to get a PR [1] in before 1.0.0, as it breaks the hashing
> > >>> behavior of DataStream.keyBy. The PR has the feature implemented and
> > >>> the Java tests adapted; there is still some outstanding work on the
> > >>> Scala tests. Gábor Horváth or I will finish it by tomorrow evening.
> > >>>
> > >>> [1] https://github.com/apache/flink/pull/1685
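
(For context, a minimal sketch of the API whose hashing is at stake; the job
below is illustrative only and not taken from the PR. keyBy routes records
with the same key to the same parallel task based on the hash of the key,
which is why changing that hashing after 1.0.0 would change how keyed
streams are partitioned.)

    // Minimal, self-contained example of DataStream.keyBy (illustrative only,
    // unrelated to the PR's actual changes): records with the same key are
    // routed to the same parallel task based on the hash of the key.
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class KeyByExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<Tuple2<String, Integer>> summed = env
                    .fromElements(
                            new Tuple2<>("a", 1),
                            new Tuple2<>("b", 2),
                            new Tuple2<>("a", 3))
                    .keyBy(0)   // key by the first tuple field; partitioning uses its hash
                    .sum(1);

            summed.print();
            env.execute("keyBy example");
        }
    }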
> > >>>
> > >>> Best,
> > >>>
> > >>> Marton
> > >>>
> > >>> On Thu, Feb 25, 2016 at 12:04 PM, Slim Baltagi <sbalt...@gmail.com>
> > >>> wrote:
> > >>>
> > >>>> Dear Flink community,
> > >>>>
> > >>>> It is great news that the vote for the first release candidate (RC1)
> > >>>> of Apache Flink 1.0.0 is starting today, February 25th, 2016!
> > >>>> As a community, we need to double our efforts and make sure that
> > >>>> Flink 1.0.0 is GA before these two upcoming major events:
> > >>>>
> > >>>>     -  Strata + Hadoop World in San Jose on *March 28-31, 2016*
> > >>>>     -  Hadoop Summit Europe in Dublin on *April 13-14, 2016*
> > >>>>
> > >>>> This is one aspect of the ‘market dynamics’ that we need to take
> > >>>> into account as a community.
> > >>>>
> > >>>> Good luck!
> > >>>>
> > >>>> Slim Baltagi
> > >>>>
> > >>>> On Feb 25, 2016, at 4:34 AM, Robert Metzger <rmetz...@apache.org>
> > >>>> wrote:
> > >>>>
> > >>>> Dear Flink community,
> > >>>>
> > >>>> Please vote on releasing the following candidate as Apache Flink
> > >>>> version 1.0.0.
> > >>>>
> > >>>> I've set u...@flink.apache.org on CC because users are encouraged
> > >>>> to help test Flink 1.0.0 for their specific use cases. Please report
> > >>>> issues (and successful tests!) on dev@flink.apache.org.
> > >>>>
> > >>>>
> > >>>> The commit to be voted on
> > >>>> (http://git-wip-us.apache.org/repos/asf/flink/commit/e4d308d6):
> > >>>> e4d308d64057e5f94bec8bbca8f67aab0ea78faa
> > >>>>
> > >>>> Branch:
> > >>>> release-1.0.0-rc1 (see
> > >>>> https://git1-us-west.apache.org/repos/asf/flink/repo?p=flink.git;a=shortlog;h=refs/heads/release-1.0.0-rc1
> > >>>> )
> > >>>>
> > >>>> The release artifacts to be voted on can be found at:
> > >>>> http://people.apache.org/~rmetzger/flink-1.0.0-rc1/
> > >>>>
> > >>>> The release artifacts are signed with the key with fingerprint
> > >>>> D9839159:
> > >>>> http://www.apache.org/dist/flink/KEYS
> > >>>>
> > >>>> The staging repository for this release can be found at:
> > >>>> https://repository.apache.org/content/repositories/orgapacheflink-1063
> > >>>>
> > >>>> -------------------------------------------------------------
> > >>>>
> > >>>> The vote is open until Tuesday and passes if a majority of at least
> > >>>> three +1 PMC votes are cast.
> > >>>>
> > >>>> The vote ends on Tuesday, March 1, 12:00 CET.
> > >>>>
> > >>>> [ ] +1 Release this package as Apache Flink 1.0.0
> > >>>> [ ] -1 Do not release this package because ...
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >
> >
>
