Hi Kurt,

I agree that this is a serious bug. However, I would not block the release because of it. As you said, there is a workaround, and `execute()` works in the most common case of a single execution. We can fix this in a minor release shortly after.

What do others think?

Regards,
Timo


On 15.08.19 at 11:23, Kurt Young wrote:
Hi,

We just found a serious bug in the blink planner:
https://issues.apache.org/jira/browse/FLINK-13708

When a user reuses the table environment instance and calls the `execute` method multiple times for different SQL statements, the later call will trigger the earlier ones to be re-executed.

It's a serious bug, but it seems we also have a workaround, which is to never reuse the table environment object. I'm not sure if we should treat this one as a blocker issue for 1.9.0.

What's your opinion?

Best,
Kurt


On Thu, Aug 15, 2019 at 2:01 PM Gary Yao <g...@ververica.com> wrote:

+1 (non-binding)

Jepsen test suite passed 10 times consecutively

On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek <aljos...@apache.org> wrote:

+1

I did some testing on a Google Cloud Dataproc cluster (it gives you a managed YARN and Google Cloud Storage (GCS)):
   - tried both YARN session mode and YARN per-job mode, also using bin/flink list/cancel/etc. against a YARN session cluster
   - ran examples that write to GCS, both with the native Hadoop FileSystem and a custom “plugin” FileSystem
   - ran stateful streaming jobs that use GCS as a checkpoint backend
   - tried running SQL programs on YARN using the SQL CLI: this worked for YARN session mode but not for YARN per-job mode. Looking at the code, I don’t think per-job mode would work, given how it is implemented. But I think it’s an OK restriction to have for now
   - in all the testing I had fine-grained recovery (region failover) enabled, but I didn’t simulate any failures

On 14. Aug 2019, at 15:20, Kurt Young <ykt...@gmail.com> wrote:

Hi,

Thanks for preparing this release candidate. I have verified the following:
- verified that the checksums and GPG files match the corresponding release files
- verified that the source archives do not contain any binaries
- built the source release with Scala 2.11 successfully
- ran `mvn verify` locally; met 2 issues [FLINK-13687] and [FLINK-13688], but both are not release blockers. Other than that, all tests passed
- ran all e2e tests which don't need to download external packages (downloading them is very unstable in China and almost impossible); all passed
- started a local cluster and ran some examples. Met a small website display issue [FLINK-13591], which is also not a release blocker
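
For anyone repeating the checksum/signature verification, a minimal sketch of the usual steps (the file names are assumed from the staging directory in [2] and the KEYS file in [3] of the vote mail; adjust as needed):

    # Import the release signing keys.
    wget https://dist.apache.org/repos/dist/release/flink/KEYS
    gpg --import KEYS

    # Fetch the source release plus its signature and checksum.
    wget https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc2/flink-1.9.0-src.tgz
    wget https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc2/flink-1.9.0-src.tgz.asc
    wget https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc2/flink-1.9.0-src.tgz.sha512

    gpg --verify flink-1.9.0-src.tgz.asc flink-1.9.0-src.tgz   # signature check
    sha512sum -c flink-1.9.0-src.tgz.sha512                    # checksum check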

Although we have pushed some fixes around the blink planner and hive integration after RC2, considering these are both preview features, I'm leaning towards being OK with releasing without these fixes.

+1 from my side. (binding)

Best,
Kurt


On Wed, Aug 14, 2019 at 5:13 PM Jark Wu <imj...@gmail.com> wrote:

Hi Gordon,

I have verified the following things:

- built the source release with Scala 2.12 and Scala 2.11 successfully
- checked/verified signatures and hashes
- checked that all POM files point to the same version
- ran some flink table related end-to-end tests locally; they succeeded (except the TPC-H e2e test, which failed and is reported in FLINK-13704)
- started clusters for both Scala 2.11 and 2.12, ran examples, verified web UI and log output; nothing unexpected
- started a cluster, ran a SQL query doing a temporal join with a kafka source and a mysql jdbc table, and wrote the results back to kafka, using DDL to create the source and sinks; looks good
- reviewed the release PR

As FLINK-13704 is not recognized as a blocker issue, +1 from my side (non-binding).

On Tue, 13 Aug 2019 at 17:07, Till Rohrmann <trohrm...@apache.org> wrote:

Hi Richard,

although I can see that it would be handy for users who have PubSub set up, I would rather not include examples which require an external dependency in the Flink distribution. I think examples should be self-contained. My concern is that we would bloat the distribution for many users to the benefit of a few. Instead, I think it would be better to make these examples available differently, maybe through Flink's ecosystem website or maybe a new examples section in Flink's documentation.

Cheers,
Till

On Tue, Aug 13, 2019 at 9:43 AM Jark Wu <imj...@gmail.com> wrote:

Hi Till,

After thinking about it, we can use VARCHAR as an alternative to timestamp/time/date. I'm fine with not recognizing it as a blocker issue. We can fix it in 1.9.1.
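
A hypothetical sketch of that workaround (table and field names made up): declare the column as VARCHAR and convert in the query instead.

    CREATE TABLE orders (
      order_id BIGINT,
      order_time VARCHAR  -- instead of TIMESTAMP, until the fix lands
    ) WITH (
      ...  -- connector properties elided
    );

    SELECT order_id, CAST(order_time AS TIMESTAMP) FROM orders;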


Thanks,
Jark


On Tue, 13 Aug 2019 at 15:10, Richard Deurwaarder <rich...@xeli.eu> wrote:

Hello all,

I noticed the PubSub example jar is not included in the examples/ dir of flink-dist. I've created https://issues.apache.org/jira/browse/FLINK-13700 + https://github.com/apache/flink/pull/9424/files to fix this.

I will leave it up to you to decide if we want to add this to 1.9.0.

Regards,
Richard

On Tue, Aug 13, 2019 at 9:04 AM Till Rohrmann <trohrm...@apache.org> wrote:

Hi Jark,

thanks for reporting this issue. Could this be a documented limitation of Blink's preview version? I think we have agreed that the Blink SQL planner will be a preview feature rather than production ready. Hence, it could still contain some bugs. My concern is that there might still be other issues which we'll discover bit by bit, and they could postpone the release even further if we say Blink bugs are blockers.

Cheers,
Till

On Tue, Aug 13, 2019 at 7:42 AM Jark Wu <imj...@gmail.com> wrote:

Hi all,

I just found an issue when testing connector DDLs against the blink planner for rc2. This issue leads to the DDL not working when it contains a timestamp/date/time type. I have created an issue FLINK-13699 [1] and a pull request for this.

IMO, this can be a blocker issue for the 1.9 release, because timestamp/date/time are primitive types, and this will break the DDL feature. However, I want to hear more thoughts from the community on whether we should recognize it as a blocker.
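
For concreteness, a hypothetical sketch of a DDL that hits the issue (the names and properties are made up, not taken from the jira):

    CREATE TABLE orders (
      order_id BIGINT,
      order_time TIMESTAMP  -- any timestamp/date/time column triggers the failure
    ) WITH (
      'connector.type' = 'kafka'  -- remaining connector properties elided
    );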

Thanks,
Jark


[1]: https://issues.apache.org/jira/browse/FLINK-13699



On Mon, 12 Aug 2019 at 22:46, Becket Qin <becket....@gmail.com> wrote:

Thanks Gordon, will do that.

On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai <tzuli...@apache.org> wrote:

Concerning FLINK-13231:

Since this is a @PublicEvolving interface, technically it is ok to break it across releases (including across bugfix releases?). So, @Becket, if you do merge it now, please mark the fix version as 1.9.1.

During the voting process, in case a new RC is created, we usually check the list of changes compared to the previous RC and correct the "Fix Version" of the corresponding JIRAs to the right version (in this case, it would be corrected to 1.9.0 instead of 1.9.1).

On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann <trohrm...@apache.org> wrote:

I agree that it would be nicer. Not sure whether we should cancel the RC for this issue, given that it has been open for quite some time and hasn't been addressed until very recently. Maybe we could include it on the shortlist of nice-to-do things which we do in case the RC gets cancelled.

Cheers,
Till

On Mon, Aug 12, 2019 at 4:18 PM Becket Qin <becket....@gmail.com> wrote:

Hi Till,

Yes, I think we have already documented it that way. So technically speaking, it is fine to change it later. It is just better if we could avoid doing that.

Thanks,

Jiangjie (Becket) Qin

On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann <trohrm...@apache.org> wrote:

Could we say that the PubSub connector is public evolving instead?

Cheers,
Till

On Mon, Aug 12, 2019 at 3:18 PM Becket Qin <becket....@gmail.com> wrote:

Hi all,

FLINK-13231 (palindrome!) has a minor Google PubSub connector API change regarding how to configure rate limiting. The GCP PubSub connector is a newly introduced connector in 1.9, so it would be nice to include this change in 1.9 rather than later, to avoid a public API change. I am thinking of making this a blocker for 1.9. Want to check what others think.

Thanks,

Jiangjie (Becket) Qin

On Mon, Aug 12, 2019 at 2:04 PM Zili Chen <wander4...@gmail.com> wrote:

Hi Kurt,

Thanks for your explanation. For [1], I think at least we should change the JIRA issue fields, e.g. unset the fix version. For [2], I can see the change is all in test scope, but I wonder if such a commit still invalidates the release candidate. IIRC previous RC VOTE threads would contain a release manual/guide; I will try to look it up, too.

Best,
tison.


On Mon, Aug 12, 2019 at 5:42 PM Kurt Young <ykt...@gmail.com> wrote:

Hi Zili,

Thanks for the heads up. The 2 issues you mentioned were opened by me. We have found the reason for the second issue, and a PR was opened for it. As said in the jira, the issue was just a testing problem and should not be a blocker for the 1.9.0 release. However, we will still merge it into the 1.9 branch.

Best,
Kurt


On Mon, Aug 12, 2019 at 5:38 PM Zili Chen <wander4...@gmail.com> wrote:

Hi,

I just noticed that a few hours ago there were two new issues filed and marked as blockers to 1.9.0 [1][2].

Now [1] is closed as a duplicate but still marked as a blocker to 1.9.0, while [2] is downgraded to "Major" priority but still targeted to be fixed in 1.9.0.

It would be worth having the attention of our release manager at least.

Best,
tison.

[1] https://issues.apache.org/jira/browse/FLINK-13687
[2] https://issues.apache.org/jira/browse/FLINK-13688


On Mon, Aug 12, 2019 at 5:10 PM Gyula Fóra <gyula.f...@gmail.com> wrote:

Thanks Stephan :)
That looks easy enough, will try!

Gyula

On Mon, Aug 12, 2019 at 11:00 AM Stephan Ewen <se...@apache.org> wrote:

Hi Gyula!

Thanks for reporting this.

Can you try to simply build Flink without Hadoop and then export HADOOP_CLASSPATH to point at your Cloudera libs? That is the recommended way these days.
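
A minimal sketch of what that looks like (the Cloudera parcel path below is illustrative; adjust to your installation):

    # Build Flink without bundling Hadoop, then point it at the vendor libs.
    mvn clean install -DskipTests
    export HADOOP_CLASSPATH=$(hadoop classpath)
    # or, if the hadoop CLI is not on the PATH, something along the lines of:
    # export HADOOP_CLASSPATH=/opt/cloudera/parcels/CDH/jars/*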

Best,
Stephan



On Mon, Aug 12, 2019 at 10:48 AM Gyula Fóra <gyula.f...@gmail.com> wrote:

Thanks Dawid,

In the meantime I also figured out that I need to build the https://github.com/apache/flink-shaded project locally with -Dhadoop.version set to the specific hadoop version if I want something different.
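
For the record, a sketch of that local build (the hadoop version is just an example):

    git clone https://github.com/apache/flink-shaded.git
    cd flink-shaded
    mvn clean install -Dhadoop.version=2.6.0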

Cheers,
Gyula

On Mon, Aug 12, 2019 at 9:54 AM Dawid Wysakowicz <dwysakow...@apache.org> wrote:

Hi Gyula,

As for the issues with the mapr maven repository, you might have a look at this message:

https://lists.apache.org/thread.html/77f4db930216e6da0d6121065149cef43ff3ea33c9ffe9b1a3047210@%3Cdev.flink.apache.org%3E

Try using the "unsafe-mapr-repo" profile.
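
Presumably something like the following, reusing the command from your earlier mail (just a sketch):

    mvn clean install -DskipTests -Pvendor-repos -Pinclude-hadoop -Punsafe-mapr-repo -Dhadoop.version=2.6.0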

Best,

Dawid

On 11/08/2019 19:31, Gyula Fóra wrote:
Hi again,

How do I build the RC locally with the hadoop version specified? It seems like no matter what I do, I run into dependency problems with the shaded hadoop dependencies. This seems to have worked in the past.

There might be some documentation somewhere that I couldn't find, so I would appreciate any pointers :)

Thanks!
Gyula

On Sun, Aug 11, 2019 at 6:57 PM Gyula Fóra <gyula.f...@gmail.com> wrote:

Hi!

I am trying to build 1.9.0-rc2 with the -Pvendor-repos profile enabled. I get the following error with

mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.6.0 -Pinclude-hadoop

(ignore that the hadoop version is not a vendor hadoop version):

[ERROR] Failed to execute goal on project flink-hadoop-fs: Could not resolve dependencies for project org.apache.flink:flink-hadoop-fs:jar:1.9.0: Failed to collect dependencies at org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Failed to read artifact descriptor for org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Could not transfer artifact org.apache.flink:flink-shaded-hadoop-2:pom:2.6.0-7.0 from/to mapr-releases (https://repository.mapr.com/maven/): sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -> [Help 1]

This looks like a TLS error. It might not be related to the release, but it could be good to know.

Cheers,
Gyula

On Fri, Aug 9, 2019 at 6:26 PM Tzu-Li (Gordon) Tai <tzuli...@apache.org> wrote:

Please note that the unresolved issues still tagged with fix version "1.9.0", as seen in the JIRA release notes [1], are issues to update documentation for new features. I've left them associated with 1.9.0 since the documentation should still be updated for 1.9.0 soon, along with the official release.

[1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
On Fri, Aug 9, 2019 at 6:17 PM Tzu-Li (Gordon) Tai <tzuli...@apache.org> wrote:

Hi all,

Release candidate #2 for Apache Flink 1.9.0 is now ready for your review. This is the first voting candidate for 1.9.0, following the preview candidates RC0 and RC1.

Please review and vote on release candidate #2 for version 1.9.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to be deployed to dist.apache.org [2], which are signed with the key with fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag “release-1.9.0-rc2” [5].

Robert is also preparing a pull request for the announcement blog post and will update this voting thread with a link to the pull request shortly afterwards.

The vote will be open for *at least 72 hours*. Please cast your votes before *Aug. 14th (Wed.) 2019, 17:00 CET*. It is adopted by majority approval, with at least 3 PMC affirmative votes.

Thanks,
Gordon

[1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
[2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc2/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1234
[5] https://gitbox.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.9.0-rc2


