[jira] [Created] (FLINK-10386) Remove legacy class TaskExecutionStateListener

2018-09-20 Thread JIRA
陈梓立 created FLINK-10386:
---

 Summary: Remove legacy class TaskExecutionStateListener
 Key: FLINK-10386
 URL: https://issues.apache.org/jira/browse/FLINK-10386
 Project: Flink
  Issue Type: Improvement
  Components: TaskManager
Affects Versions: 1.7.0
Reporter: 陈梓立
Assignee: 陈梓立
 Fix For: 1.7.0


Following a discussion 
[here|https://github.com/apache/flink/commit/0735b5b935b0c0757943e2d58047afcfb9949560#commitcomment-30584257]
 with [~trohrm...@apache.org], I started to analyze the usage of 
{{ActorGatewayTaskExecutionStateListener}} and {{TaskExecutionStateListener}}.

In conclusion, the {{TaskExecutionStateListener}} approach has been abandoned and 
no component relies on it any more. Instead, {{TaskManagerActions}} was introduced 
to handle the communication between {{Task}} and {{TaskManager}}. Nothing except 
the {{TaskManager}} should communicate directly with a {{Task}}, so the legacy 
class {{TaskExecutionStateListener}} can be safely removed.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10385) Implement a waitUntilCondition utils

2018-09-20 Thread JIRA
陈梓立 created FLINK-10385:
---

 Summary: Implement a waitUntilCondition utils
 Key: FLINK-10385
 URL: https://issues.apache.org/jira/browse/FLINK-10385
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.7.0
Reporter: 陈梓立
 Fix For: 1.7.0


Recently, while refining some tests, I noticed that it is a common requirement to 
wait until a (stable) condition occurs.

To achieve this, we have {{ExecutionGraphTestUtils#waitUntilJobStatus}} and many 
others. Most of them can be abstracted as
{code:java}
public static void waitUntilCondition(
		SupplierWithException<Boolean, Exception> conditionSupplier,
		Deadline deadline) throws Exception {
	while (deadline.hasTimeLeft()) {
		if (conditionSupplier.get()) {
			return;
		}
		Thread.sleep(Math.min(deadline.timeLeft().toMillis(), 500L));
	}
	throw new IllegalStateException("Condition was not fulfilled before the deadline.");
}
{code}
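
For illustration, a hypothetical usage from a test could look like the sketch 
below ({{eg}} is a placeholder for an {{ExecutionGraph}}, and {{Deadline}} refers 
to {{org.apache.flink.api.common.time.Deadline}}; this is only a sketch, not 
existing code):
{code:java}
// Hypothetical usage: wait at most 30 seconds for the job to reach RUNNING.
Deadline deadline = Deadline.fromNow(Duration.ofSeconds(30));
waitUntilCondition(
	() -> eg.getState() == JobStatus.RUNNING,
	deadline);
{code}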
 

I propose to implement such a method to avoid having many scattered utility 
methods that all serve the same purpose.
 Looking forward to your advice. If there is existing code or a project that 
already implements this, I am glad to adopt it.

cc [~Zentol]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10384) Add Sinh math function supported in Table API and SQL

2018-09-20 Thread vinoyang (JIRA)
vinoyang created FLINK-10384:


 Summary: Add Sinh math function supported in Table API and SQL
 Key: FLINK-10384
 URL: https://issues.apache.org/jira/browse/FLINK-10384
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Reporter: vinoyang
Assignee: vinoyang


Similar to FLINK-10340: add support for the hyperbolic sine function {{SINH}} in the Table API and SQL.
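
For illustration only, usage would presumably look like the sketch below (table 
and column names are made up; the Table API expression name follows the pattern 
of FLINK-10340 and is an assumption, not the actual implementation):
{code:java}
// Sketch only: "Orders" and "amount" are illustrative names.
Table sqlResult = tableEnv.sqlQuery("SELECT SINH(amount) FROM Orders");
// Java Table API (string expression) would presumably be:
Table apiResult = orders.select("amount.sinh()");
{code}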



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[DISCUSS] [Contributing] (3) - Review Tooling

2018-09-20 Thread Stephan Ewen
 Hi all!

This thread is dedicated to discuss the tooling we want to use for the
reviews.
It is spun out of the proposal *"A more structured approach to reviews and
contributions".*


*Suggestions brought up so far*


*Use comments / template with checklist*

  - Easy to do
  - Manual, a bit of reviewer overhead; reviewers need to know the process

*Use a bot*

  - Automatically add the review questions to each new PR
  - Further details?

*Use GitHub labels*

  - Searchable
  - Possibly not obvious to new contributors
  - Any restrictions? Do members need to apply at ASF infra to have
permissions to edit github labels?


[DISCUSS] [Contributing] (2) - Review Steps

2018-09-20 Thread Stephan Ewen
Hi all!

This thread is dedicated to discuss the specific review steps and answers
we want to have during reviews.
It is spun out of the proposal *"A more structured approach to reviews and
contributions".*

Please keep this thread focused on the review steps, NOT on the tooling
(bot, comment/template, labels, ...). There will be a separate thread for
that.


*Discussion so far*

There seems to be almost consensus in the basic approach, with open
questions about details as outlined below.


*(1) Do we agree on the five basic steps below?*

  - Do we want to make "(3) Is the contribution described well" the first
item?


*(2) How do we understand that consensus is reached about adding the
feature?*

  - When one committer +1s the question and no other person voices
concerns, is this consensus? (classical lazy consensus)

*(3) To answer the question whether a PR needs special attention*

  - Also tagged by the committers that drive the "should this be added"
consensus
  - Should we create a wiki page of "component experts"?




*Original Review Guide Proposal*

https://docs.google.com/document/d/1yaX2b9LNh-6LxrAmE23U3D2cRbocGlGKCYnvJd9l
Vhk/edit?usp=sharing

*How to Review Contributions*

This guide is for all committers and contributors that want to help with
reviewing contributions. Thank you for your effort - good reviews are one of the
most important and crucial parts of an open source project.

This guide should help the community to make reviews such that:
  - Contributors have a good contribution experience
  - Reviews are structured and check all important aspects of a contribution
  - We keep a high code quality in Flink
  - We avoid situations where contributors and reviewers spend a lot of time to
refine a contribution that gets rejected later

*Review Checklist*

Every review needs to check the following five aspects. We encourage checking
these aspects in order, to avoid spending time on detailed code quality reviews
when there is not yet consensus that a feature or change should actually be
added.

*(1) Is there consensus whether the change or feature should go into Flink?*

For bug fixes, this needs to be checked only in case it requires bigger changes
or might break existing programs and setups.

Ideally, this question is already answered from a JIRA issue or the dev-list
discussion, except in cases of bug fixes and small lightweight
additions/extensions. In that case, this question can be immediately marked as
resolved. For pull requests that are created without prior consensus, this
question needs to be answered as part of the review.

The decision whether the change should go into Flink needs to take the
following aspects into consideration:
  - Does the contribution alter the behavior of features or components in a way
that it may break previous users' programs and setups? If yes, there needs to
be a discussion and agreement that this change is desirable.
  - Does the contribution conceptually fit well into Flink? Is it too much of a
special case such that it makes things more complicated for the common case, or
bloats the abstractions / APIs?
  - Does the feature fit well into Flink's architecture? Will it scale and keep
Flink flexible for the future, or will the feature restrict Flink in the
future?
  - Is the feature a significant new addition (rather than an improvement to an
existing part)? If yes, will the Flink community commit to maintaining this
feature?
  - Does the feature produce added value for Flink users or developers? Or does
it introduce risk of regression without adding relevant user or developer
benefit?

All of these questions should be answerable from the description/discussion in
JIRA and the Pull Request, without looking at the code.

*(2) Does the contribution need attention from some specific committers and is
there time commitment from these committers?*

Some changes require attention and approval from specific committers. For
example, changes in parts that are either very performance sensitive, or have a
critical impact on distributed coordination and fault tolerance, need input by a
committer that is deeply familiar with the component.

As a rule of thumb, this is the case when the Pull Request description answers
one of the questions in the template section "Does this pull request
potentially affect one of the following parts" with 'yes'.

This question can be answered with:
  - Does not need specific attention
  - Needs specific attention for X (X can be for example checkpointing,
jobmanager, etc.)
  - Has specific attention for X by @committerA, @contributorB

If the pull request needs specific attention, one of the tagged
committers/contributors should give the final approval.

*(3) Is the contribution described well?*

Check whether the contribution is sufficiently well described to support a good
review. Trivial changes and fixes do not need a long description. Any pull
request that changes functionality or behavior needs to describe 

Re: [PROPOSAL] [community] A more structured approach to reviews and contributions

2018-09-20 Thread Stephan Ewen
Hi all!

Thank you for the interest and lively discussion.

I think we are starting to mix different things into one discussion, so I
would like to close this thread and start three separate dedicated threads:

(1) Contributing Guide and Pull Request Template

(2) Review Process (steps, questions, checklists only, no tools like
bots/labels/templates)

(3) Review Tools (whether to drive the review process with labels,
comments/templates, bots, etc.)

Will post these threads asap.

Best,
Stephan


On Thu, Sep 20, 2018 at 3:49 PM, 陈梓立  wrote:

> Hi Fabian,
>
> I see a bot "project-bot" active on pull requests. It is some progress of
> this thread?
>
> Best,
> tison.
>
>
> Thomas Weise  wrote on Wed, Sep 19, 2018 at 10:02 PM:
>
> > Follow-up regarding the PR template that pops up when opening a PR:
> >
> > I think what we have now is a fairly big blob of text that jumps up a bit
> > unexpectedly for a first time contributor and is also cumbersome to deal
> > with in the small PR description window. Perhaps we can improve it a bit:
> >
> > * Instead of putting all that text into the description, add it to
> > website/wiki and just have a pointer in the PR, asking the contributor to
> > review the guidelines before opening a PR.
> > * If the questions further down can be made relevant to the context of
> the
> > contribution, that would probably help both the contributor and the
> > reviewer. For example, the questions would be different for a
> documentation
> > change, connector change or work deep in core. Not sure if that can be
> > automated, but if moved to a separate page, it could be structured
> better.
> >
> > Thanks,
> > Thomas
> >
> >
> >
> >
> >
> >
> > On Tue, Sep 18, 2018 at 8:13 AM 陈梓立  wrote:
> >
> > > Put some good cases here might be helpful.
> > >
> > > See how a contribution of runtime module be proposed, discussed,
> > > implemented and merged from  https://github.com/apache/flink/pull/5931
> > to
> > > https://github.com/apache/flink/pull/6132.
> > >
> > > 1. #5931 fix a bug, but remains points could be improved. Here
> sihuazhou
> > > and shuai-xu share their considerations and require review(of the
> > proposal)
> > > by Stephan, Till and Gary, our committers.
> > > 2. After discussion, all people involved reach a consensus. So the rest
> > > work is to implement it.
> > > 3. sihuazhou gives out an implementation #6132, Till reviews it and
> find
> > it
> > > is somewhat out of the "architectural" aspect, so suggests
> > > implementation-level changes.
> > > 4. Addressing those implementation-level comments, the PR gets merged.
> > >
> > > I think this is quite a good example how we think our review process
> > should
> > > go.
> > >
> > > Best,
> > > tison.
> > >
> > >
> > > 陈梓立  wrote on Tue, Sep 18, 2018 at 10:53 PM:
> > >
> > > > Maybe a little rearrange to the process would help.
> > > >
> > > > (1). Does the contributor describe itself well?
> > > >   (1.1) By whom this contribution should be given attention. This
> often
> > > > shows by its title, "[FLINK-XXX] [module]", the module part infer.
> > > >   (1.2) What the purpose of this contribution is. Done by the PR
> > > template.
> > > > Even on JIRA an issue should cover these points.
> > > >
> > > > (2). Is there consensus on the contribution?
> > > > This follows (1), because we need to clear what the purpose of the
> > > > contribution first. At this stage reviewers could cc to module
> > maintainer
> > > > as a supplement to (1.1). Also reviewers might ask the contributor to
> > > > clarify his purpose to sharp(1.2)
> > > >
> > > > (3). Is the implement architectural and fit code style?
> > > > This follows (2). And only after a consensus we talk about concrete
> > > > implement, which prevent spend time and put effort in vain.
> > > >
> > > > In addition, ideally a "+1" comment or approval means the purpose of
> > > > contribution is supported by the reviewer and implement(if there is)
> > > > quality is fine, so the reviewer vote for a consensus.
> > > >
> > > > Best,
> > > > tison.
> > > >
> > > >
> > > > Stephan Ewen  wrote on Tue, Sep 18, 2018 at 6:44 PM:
> > > >
> > > >> On the template discussion, some thoughts
> > > >>
> > > >> *PR Template*
> > > >>
> > > >> I think the PR template went well. We can rethink the "checklist" at
> > the
> > > >> bottom, but all other parts turned out helpful in my opinion.
> > > >>
> > > >> With the amount of contributions, it helps to ask the contributor to
> > > take
> > > >> a
> > > >> little more work in order for the reviewer to be more efficient.
> > > >> I would suggest to keep that mindset: Whenever we find a way that
> the
> > > >> contributor can prepare stuff in such a way that reviews become
> > > >> more efficient, we should do that. In my experience, most
> contributors
> > > are
> > > >> willing to put in some extra minutes if it helps that their
> > > >> PR gets merged faster.
> > > >>
> > > >> *Review Template*
> > > >>
> > > >> I think it would be helpful to have this checklist. It does not
> matter
> > > in
> 

[DISCUSS] [Contributing] (1) - Pull Request Template

2018-09-20 Thread Stephan Ewen
Hi all!

This thread is dedicated to discuss improvements to the way we encourage
contributions. It is spun out of the proposal *"A more structured approach
to reviews and contributions".*

Please keep this thread focused on the contribution side, there are
separate discussions about the reviews!


*Points that were brought up about contributions so far*


*Adjust the contribution guide to not encourage "single commit" pull
requests*

Instead, factor out unrelated changes, cleanup, preparatory refactoring
into individual commits for easier reviews.

  - I personally fully subscribe to this, it is really helpful to the
review process and not really much extra work.


*Simplify the Pull Request template. The template is overwhelming and hard
to work with.*

  - Previous suggestion: remove the introductory text and reference the
contribution guide website
  - We had that before, there was a significant number of new contributors
that did not check out the contribution guide, hence we added a condensed
form of the contribution guide in the template so that everyone is sure to
see it. It creates mild overhead for frequent contributors, but PRs are
better described since.

  - We could think to remove the introduction text, and could push back on
badly described pull requests as a first thing during reviews

  - I personally found the "sensitivity checklist" (does the PR touch X or
Y) quite helpful, it implicitly brought it to the attention of contributors.
  - With a more principled review process, we could let this checklist be
part of the "needs extra attention" checks and remove it from the
contribution PR template.


*Putting work on reviewers versus contributors*

We have many more contributions than active and experiences reviewers.
Adding more reviews is needed, but there will probably always be more
contributors than reviewers.

As a consequence, it can make sense to ask the contributor for a little it
of extra work (like better description, better separation of changes into
commits) if that helps the reviewers to be more efficient and better handle
contributions. In my experience, most contributors understand that and are
willing to help do these things if it helps getting the contribution merged.

How do other people see this?

Best,
Stephan


[jira] [Created] (FLINK-10383) Hadoop configurations on the classpath seep into the S3 file system configs

2018-09-20 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-10383:


 Summary: Hadoop configurations on the classpath seep into the S3 
file system configs
 Key: FLINK-10383
 URL: https://issues.apache.org/jira/browse/FLINK-10383
 Project: Flink
  Issue Type: Bug
  Components: FileSystem
Affects Versions: 1.6.1
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.7.0, 1.6.2


The S3 connectors are based on a self-contained shaded Hadoop. By design, they 
should only use config values from the Flink configuration.

However, because Hadoop implicitly loads configs from the classpath, existing 
"core-site.xml" files can interfere with the configuration in ways that are not 
transparent to the user. We should ensure such configs are not loaded.
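
One possible direction, sketched below under assumptions (this is not 
necessarily how the fix will look): create the shaded Hadoop {{Configuration}} 
with default-resource loading disabled, so classpath files like 
{{core-site.xml}} are ignored, and copy over only keys coming from the Flink 
configuration.
{code:java}
// Sketch only: 'flinkConfig' (org.apache.flink.configuration.Configuration) and the
// "s3." key prefix are assumptions for illustration.
org.apache.hadoop.conf.Configuration hadoopConfig =
	new org.apache.hadoop.conf.Configuration(false); // 'false' skips core-default.xml / core-site.xml

for (String key : flinkConfig.keySet()) {
	if (key.startsWith("s3.")) {
		// Forward only explicitly configured Flink keys to the shaded Hadoop config.
		hadoopConfig.set("fs.s3a." + key.substring("s3.".length()), flinkConfig.getString(key, null));
	}
}
{code}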






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10382) Writer has already been opened while using AvroKeyValueSinkWriter and BucketingSink

2018-09-20 Thread Chengzhi Zhao (JIRA)
Chengzhi Zhao created FLINK-10382:
-

 Summary: Writer has already been opened while using 
AvroKeyValueSinkWriter and BucketingSink
 Key: FLINK-10382
 URL: https://issues.apache.org/jira/browse/FLINK-10382
 Project: Flink
  Issue Type: Bug
  Components: Streaming Connectors
Affects Versions: 1.6.0, 1.5.0
Reporter: Chengzhi Zhao


I am using Flink 1.6.0 with AvroKeyValueSinkWriter and BucketingSink, writing 
to S3.

After the application had been running for a while (~20 minutes), I got an 
*exception: java.lang.IllegalStateException: Writer has already been opened*
{code:java}
2018-09-17 15:40:23,771 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering 
checkpoint 4 @ 1537198823640 for job 8f9ab122fb7452714465eb1e1989e4d7.
2018-09-17 15:41:27,805 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Unnamed (2/16) 
(25914cb3f77c8e4271b0fb6ea597ed50) switched from RUNNING to FAILED.
java.lang.IllegalStateException: Writer has already been opened
at 
org.apache.flink.streaming.connectors.fs.StreamWriterBase.open(StreamWriterBase.java:68)
at 
org.apache.flink.streaming.connectors.fs.AvroKeyValueSinkWriter.open(AvroKeyValueSinkWriter.java:150)
at 
org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.openNewPartFile(BucketingSink.java:583)
at 
org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.invoke(BucketingSink.java:458)
at 
org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
at 
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:104)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
at java.lang.Thread.run(Thread.java:748)
2018-09-17 15:41:27,808 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job Stream to Stream 
Join (8f9ab122fb7452714465eb1e1989e4d7) switched from state RUNNING to FAILING.
java.lang.IllegalStateException: Writer has already been opened
at 
org.apache.flink.streaming.connectors.fs.StreamWriterBase.open(StreamWriterBase.java:68)
at 
org.apache.flink.streaming.connectors.fs.AvroKeyValueSinkWriter.open(AvroKeyValueSinkWriter.java:150)
at 
org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.openNewPartFile(BucketingSink.java:583)
at 
org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.invoke(BucketingSink.java:458)
at 
org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
at 
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:104)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
at java.lang.Thread.run(Thread.java:748)
{code}
After checking the code, I think the issue might be related to 
AvroKeyValueSinkWriter.java, leading to the writer not being closed 
completely. I also noticed this change, which affects 1.5+: 
[https://github.com/apache/flink/commit/915213c7afaf3f9d04c240f43d88710280d844e3#diff-86c35c993fdb0c482544951b376e5ea6]
I created my own AvroKeyValueSinkWriter class and implemented the close logic 
similar to v1.4; it seems to be running fine now.
{code:java}
@Override
public void close() throws IOException {
	try {
		// Close the underlying stream first (StreamWriterBase).
		super.close();
	} finally {
		// Always close the Avro key/value writer as well, even if super.close() throws,
		// so the writer can be opened again for the next part file.
		if (keyValueWriter != null) {
			keyValueWriter.close();
		}
	}
}
{code}
I am curious whether anyone has had a similar issue. I would appreciate any 
insights on it. Thanks!
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10381) concurrent submit job get ProgramAbortException

2018-09-20 Thread Youjun Yuan (JIRA)
Youjun Yuan created FLINK-10381:
---

 Summary: concurrent submit job get ProgramAbortException
 Key: FLINK-10381
 URL: https://issues.apache.org/jira/browse/FLINK-10381
 Project: Flink
  Issue Type: Bug
  Components: JobManager
Affects Versions: 1.6.0, 1.5.1, 1.4.0
 Environment: Flink 1.4.0, standardalone cluster.
Reporter: Youjun Yuan
 Fix For: 1.7.0
 Attachments: image-2018-09-20-22-40-31-846.png

If multiple jobs are submitted concurrently, some of them are likely to fail and 
return the following exception: 

_java.util.concurrent.CompletionException: 
org.apache.flink.util.FlinkException: Could not run the jar._ 
_at 
org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleJsonRequest$0(JarRunHandler.java:90)_
 
_at 
org.apache.flink.runtime.webmonitor.handlers.JarRunHandler$$Lambda$47/713642705.get(Unknown
 Source)_ 
_at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1582)_
 
_at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)_ 
_at java.util.concurrent.FutureTask.run(FutureTask.java:266)_ 
_at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)_
 
_at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)_
 
_at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)_
 
_at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)_
 
_at java.lang.Thread.run(Thread.java:745)_
_Caused by: org.apache.flink.util.FlinkException: Could not run the jar. ... 10 
more_

_Caused by: org.apache.flink.client.program.ProgramInvocationException: The 
program caused an error:_ 
_at 
org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:93)_
 
_at 
org.apache.flink.client.program.ClusterClient.getOptimizedPlan(ClusterClient.java:334)_
 
_at 
org.apache.flink.runtime.webmonitor.handlers.JarActionHandler.getJobGraphAndClassLoader(JarActionHandler.java:76)_
 
_at 
org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleJsonRequest$0(JarRunHandler.java:69)
 ... 9 more_

_Caused by: 
org.apache.flink.client.program.OptimizerPlanEnvironment$ProgramAbortException_ 
_at 
org.apache.flink.streaming.api.environment.StreamPlanEnvironment.execute(StreamPlanEnvironment.java:72)_
 
_..._ 
_at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)_ 
_at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)_ 
_at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)_
 
_at java.lang.reflect.Method.invoke(Method.java:497)_ 
_at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:525)_
 
_at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:417)_
 
_at 
org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83_

 
h2. Possible Cause:

In OptimizerPlanEnvironment.getOptimizedPlan(), setAsContext() sets a static 
variable named contextEnvironmentFactory in ExecutionEnvironment, which 
eventually causes ExecutionEnvironment.getExecutionEnvironment() to return the 
current OptimizerPlanEnvironment instance, which captures the optimizerPlan and 
saves it to an instance variable in OptimizerPlanEnvironment.

However, if multiple jobs are submitted at the same time, the static variable 
contextEnvironmentFactory in ExecutionEnvironment is set again by a following 
job, forcing ExecutionEnvironment.getExecutionEnvironment() to return another, 
new instance of OptimizerPlanEnvironment. Therefore, the first instance of 
OptimizerPlanEnvironment does not capture the optimizerPlan and throws a 
ProgramInvocationException. The relevant code is copied below for your 
convenience:

{code:java}
setAsContext();
try {
	prog.invokeInteractiveModeForExecution();
}
catch (ProgramInvocationException e) {
	throw e;
}
catch (Throwable t) {
	// the invocation gets aborted with the preview plan
	if (optimizerPlan != null) {
		return optimizerPlan;
	} else {
		throw new ProgramInvocationException("The program caused an error: ", t);
	}
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[ANNOUNCE] Apache Flink 1.6.1 released

2018-09-20 Thread Till Rohrmann
The Apache Flink community is very happy to announce the release of Apache
Flink 1.6.1, which is the first bug fix release for the Apache Flink 1.6
series.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements
for this bug fix release:
https://flink.apache.org/news/2018/09/20/release-1.6.1.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12343752

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Cheers,
Till


[ANNOUNCE] Apache Flink 1.5.4 released

2018-09-20 Thread Till Rohrmann
The Apache Flink community is very happy to announce the release of Apache
Flink 1.5.4, which is the fourth bug fix release for the Apache Flink 1.5
series.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements
for this bug fix release:
https://flink.apache.org/news/2018/09/20/release-1.5.4.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12343899

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Cheers,
Till


[jira] [Created] (FLINK-10379) Can not use Table Functions in Java Table API

2018-09-20 Thread Piotr Nowojski (JIRA)
Piotr Nowojski created FLINK-10379:
--

 Summary: Can not use Table Functions in Java Table API
 Key: FLINK-10379
 URL: https://issues.apache.org/jira/browse/FLINK-10379
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.6.1
Reporter: Piotr Nowojski


As stated in the 
[documentation|https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/udfs.html#table-functions],
 this is how table functions should be used in the Java Table API:
{code:java}
// Register the function.
tableEnv.registerFunction("split", new Split("#"));

myTable.join("split(a) as (word, length)");
{code}
However, {{Table.join(String)}} was removed some time ago, and now it is 
impossible to use Table Functions in the Java Table API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10380) Check if key is not nul before assign to group in KeyedStream

2018-09-20 Thread Sayat Satybaldiyev (JIRA)
Sayat Satybaldiyev created FLINK-10380:
--

 Summary: Check if key is not nul before assign to group in 
KeyedStream
 Key: FLINK-10380
 URL: https://issues.apache.org/jira/browse/FLINK-10380
 Project: Flink
  Issue Type: Task
Affects Versions: 1.6.0
Reporter: Sayat Satybaldiyev


If a user creates a KeyedStream and partitions by a key that might be null, the 
Flink job throws a NullPointerException at runtime. However, the NPE that Flink 
throws is hard to debug and understand, as it doesn't refer to the place in the 
Flink job that caused it.

*Suggestion:*

Add a precondition that checks that the key is not null and throws a 
descriptive error if it is null.
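
A minimal sketch of the kind of check proposed, assuming it is added in 
{{KeyGroupRangeAssignment#assignToKeyGroup}} (the exact location and message are 
open for discussion):
{code:java}
public static int assignToKeyGroup(Object key, int maxParallelism) {
	// Proposed (sketch): fail fast with a descriptive message instead of a bare NPE.
	Preconditions.checkNotNull(key,
		"Assigned key must not be null. Did the KeySelector return null for some record?");
	return computeKeyGroupForKeyHash(key.hashCode(), maxParallelism);
}
{code}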

 

*Job Example*:

 
{code:java}
DataStream<String> stream = env.fromCollection(Arrays.asList("aaa", "bbb"))
	.map(x -> (String) null)
	.keyBy(x -> x);
{code}
 

 

An error that is thrown:

 
{code:java}
Exception in thread "main" 
org.apache.flink.runtime.client.JobExecutionException: 
java.lang.RuntimeException
 at 
org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:623)
 at 
org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:123)
 at org.myorg.quickstart.BuggyKeyedStream.main(BuggyKeyedStream.java:61)
Caused by: java.lang.RuntimeException
 at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:110)
 at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)16:26:43,110
 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Stopped Akka RPC 
service.
at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
 at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
 at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
 at 
org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
 at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
 at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
 at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
 at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
 at 
org.apache.flink.runtime.state.KeyGroupRangeAssignment.assignToKeyGroup(KeyGroupRangeAssignment.java:59)
 at 
org.apache.flink.runtime.state.KeyGroupRangeAssignment.assignKeyToParallelOperator(KeyGroupRangeAssignment.java:48)
 at 
org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.selectChannels(KeyGroupStreamPartitioner.java:63)
 at 
org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.selectChannels(KeyGroupStreamPartitioner.java:32)
 at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:104)
 at 
org.apache.flink.streaming.runtime.io.StreamRecordWriter.emit(StreamRecordWriter.java:81)
 at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
{code}

... 10 more



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [PROPOSAL] [community] A more structured approach to reviews and contributions

2018-09-20 Thread 陈梓立
Hi Fabian,

I see a bot "project-bot" active on pull requests. Is this some progress from
this thread?

Best,
tison.


Thomas Weise  wrote on Wed, Sep 19, 2018 at 10:02 PM:

> Follow-up regarding the PR template that pops up when opening a PR:
>
> I think what we have now is a fairly big blob of text that jumps up a bit
> unexpectedly for a first time contributor and is also cumbersome to deal
> with in the small PR description window. Perhaps we can improve it a bit:
>
> * Instead of putting all that text into the description, add it to
> website/wiki and just have a pointer in the PR, asking the contributor to
> review the guidelines before opening a PR.
> * If the questions further down can be made relevant to the context of the
> contribution, that would probably help both the contributor and the
> reviewer. For example, the questions would be different for a documentation
> change, connector change or work deep in core. Not sure if that can be
> automated, but if moved to a separate page, it could be structured better.
>
> Thanks,
> Thomas
>
>
>
>
>
>
> On Tue, Sep 18, 2018 at 8:13 AM 陈梓立  wrote:
>
> > Put some good cases here might be helpful.
> >
> > See how a contribution of runtime module be proposed, discussed,
> > implemented and merged from  https://github.com/apache/flink/pull/5931
> to
> > https://github.com/apache/flink/pull/6132.
> >
> > 1. #5931 fix a bug, but remains points could be improved. Here sihuazhou
> > and shuai-xu share their considerations and require review(of the
> proposal)
> > by Stephan, Till and Gary, our committers.
> > 2. After discussion, all people involved reach a consensus. So the rest
> > work is to implement it.
> > 3. sihuazhou gives out an implementation #6132, Till reviews it and find
> it
> > is somewhat out of the "architectural" aspect, so suggests
> > implementation-level changes.
> > 4. Addressing those implementation-level comments, the PR gets merged.
> >
> > I think this is quite a good example how we think our review process
> should
> > go.
> >
> > Best,
> > tison.
> >
> >
> > 陈梓立  wrote on Tue, Sep 18, 2018 at 10:53 PM:
> >
> > > Maybe a little rearrange to the process would help.
> > >
> > > (1). Does the contributor describe itself well?
> > >   (1.1) By whom this contribution should be given attention. This often
> > > shows by its title, "[FLINK-XXX] [module]", the module part infer.
> > >   (1.2) What the purpose of this contribution is. Done by the PR
> > template.
> > > Even on JIRA an issue should cover these points.
> > >
> > > (2). Is there consensus on the contribution?
> > > This follows (1), because we need to clear what the purpose of the
> > > contribution first. At this stage reviewers could cc to module
> maintainer
> > > as a supplement to (1.1). Also reviewers might ask the contributor to
> > > clarify his purpose to sharp(1.2)
> > >
> > > (3). Is the implement architectural and fit code style?
> > > This follows (2). And only after a consensus we talk about concrete
> > > implement, which prevent spend time and put effort in vain.
> > >
> > > In addition, ideally a "+1" comment or approval means the purpose of
> > > contribution is supported by the reviewer and implement(if there is)
> > > quality is fine, so the reviewer vote for a consensus.
> > >
> > > Best,
> > > tison.
> > >
> > >
> > > Stephan Ewen  wrote on Tue, Sep 18, 2018 at 6:44 PM:
> > >
> > >> On the template discussion, some thoughts
> > >>
> > >> *PR Template*
> > >>
> > >> I think the PR template went well. We can rethink the "checklist" at
> the
> > >> bottom, but all other parts turned out helpful in my opinion.
> > >>
> > >> With the amount of contributions, it helps to ask the contributor to
> > take
> > >> a
> > >> little more work in order for the reviewer to be more efficient.
> > >> I would suggest to keep that mindset: Whenever we find a way that the
> > >> contributor can prepare stuff in such a way that reviews become
> > >> more efficient, we should do that. In my experience, most contributors
> > are
> > >> willing to put in some extra minutes if it helps that their
> > >> PR gets merged faster.
> > >>
> > >> *Review Template*
> > >>
> > >> I think it would be helpful to have this checklist. It does not matter
> > in
> > >> which form, be that as a text template, be that as labels.
> > >>
> > >> The most important thing is to make explicit which questions have been
> > >> answered in the review.
> > >> Currently there is a lot of "+1" on pull requests which means "code
> > >> quality
> > >> is fine", but all other questions are unanswered.
> > >> The contributors then rightfully wonder why this does not get merged.
> > >>
> > >>
> > >>
> > >> On Tue, Sep 18, 2018 at 7:26 AM, 陈梓立  wrote:
> > >>
> > >> > Hi all interested,
> > >> >
> > >> > Within the document there is a heated discussion about how the PR
> > >> > template/review template should be.
> > >> >
> > >> > Here share my opinion:
> > >> >
> > >> > 1. For the review template, actually we don't need comment a review
> > >> > template at all. 

[jira] [Created] (FLINK-10378) Hide/Comment out contribute guide from PULL_REQUEST_TEMPLATE

2018-09-20 Thread JIRA
陈梓立 created FLINK-10378:
---

 Summary: Hide/Comment out contribute guide from 
PULL_REQUEST_TEMPLATE
 Key: FLINK-10378
 URL: https://issues.apache.org/jira/browse/FLINK-10378
 Project: Flink
  Issue Type: Improvement
  Components: GitHub
Reporter: 陈梓立
Assignee: 陈梓立


Explicitly comment out the contribution guide in the PULL_REQUEST_TEMPLATE by 
wrapping it in HTML comments.

This hints to contributors that the text is informational only and will not 
appear in the final PR description; as a side effect, it also saves 
contributors the work of deleting that text every time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [RESULT] [VOTE] Release 1.5.4, release candidate #1

2018-09-20 Thread Till Rohrmann
I forgot to add myself to the list of binding votes.

There are 5 approving votes, 4 of which are binding:
* Fabian Hueske (binding)
* Gary Yao (non-binding)
* Tzu-Li Tai (binding)
* Timo Walther (binding)
* Till Rohrmann (binding)

Cheers,
Till

On Thu, Sep 20, 2018 at 3:38 PM Till Rohrmann  wrote:

> I'm happy to announce that we have unanimously approved this release.
>
> There are 5 approving votes, 4 of which are binding:
> * Fabian Hueske (binding)
> * Gary Yao (non-binding)
> * Tzu-Li Tai (binding)
> * Timo Walther (binding)
>
> There are no disapproving votes.
>
> Thanks everyone!
>
> Cheers,
> Till
>


[RESULT] [VOTE] Release 1.6.1, release candidate #1

2018-09-20 Thread Till Rohrmann
I'm happy to announce that we have approved this release.

There are 6 approving votes, 4 of which are binding:
* Gary Yao (non-binding)
* Fabian Hueske (binding)
* Vino Yang (non-binding)
* Till Rohrmann (binding)
* Tzu-Li Tai (binding)
* Timo Walther (binding)

There is one disapproving vote:
* Shimin Yang (non-binding)

Thanks everyone!

Cheers,
Till


[RESULT] [VOTE] Release 1.5.4, release candidate #1

2018-09-20 Thread Till Rohrmann
I'm happy to announce that we have unanimously approved this release.

There are 5 approving votes, 4 of which are binding:
* Fabian Hueske (binding)
* Gary Yao (non-binding)
* Tzu-Li Tai (binding)
* Timo Walther (binding)

There are no disapproving votes.

Thanks everyone!

Cheers,
Till


Re: [VOTE] Release 1.5.4, release candidate #1

2018-09-20 Thread Till Rohrmann
Thanks everyone for testing and voting on this RC. Hereby I close the vote.
The result will be announced in a separate thread.

Cheers,
Till

On Thu, Sep 20, 2018 at 2:34 PM Till Rohrmann  wrote:

> +1 (binding)
>
> - Verified checksums and signatures
> - Ran all end-to-end tests
> - Executed Jepsen test suite (including test for standby JobManager)
> - Build Flink from sources
> - Verified that no new dependencies were added
>
> Cheers,
> Till
>
> On Thu, Sep 20, 2018 at 1:54 PM Timo Walther  wrote:
>
>> +1 (binding)
>>
>> - Checked all issues that went into the release (I found one JIRA issue
>> that has been incorrectly marked)
>> - Built from source (On my machine SelfJoinDeadlockITCase is failing due
>> to a timeout, in the IDE it works correctly. I guess my machine was just
>> too busy.)
>> - Run some end-to-end tests locally with success
>>
>> Regards,
>> Timo
>>
>>
>> Am 20.09.18 um 11:24 schrieb Tzu-Li (Gordon) Tai:
>> > +1 (binding)
>> >
>> > - Verified checksums / signatures
>> > - Checked announcement PR in flink-web
>> > - No new / changed dependencies
>> > - Built from source (Hadoop-free, Scala 2.11)
>> > - Run end-to-end tests locally, passes
>> >
>> > On Thu, Sep 20, 2018 at 5:03 AM Fabian Hueske 
>> wrote:
>> >
>> >> +1 binding
>> >>
>> >> * I checked the diffs and did not find any added dependencies or
>> updated
>> >> dependency versions.
>> >> * I checked the sha hash and signatures of all release artifacts.
>> >>
>> >> Best, Fabian
>> >>
>> >> 2018-09-15 23:26 GMT+02:00 Till Rohrmann :
>> >>
>> >>> Hi everyone,
>> >>> Please review and vote on the release candidate #1 for the version
>> 1.5.4,
>> >>> as follows:
>> >>> [ ] +1, Approve the release
>> >>> [ ] -1, Do not approve the release (please provide specific comments)
>> >>>
>> >>>
>> >>> The complete staging area is available for your review, which
>> includes:
>> >>> * JIRA release notes [1],
>> >>> * the official Apache source release and binary convenience releases
>> to
>> >> be
>> >>> deployed to dist.apache.org [2], which are signed with the key
>> >>> with fingerprint 1F302569A96CFFD5 [3],
>> >>> * all artifacts to be deployed to the Maven Central Repository [4],
>> >>> * source code tag "release-1.5.4-rc1" [5],
>> >>> * website pull request listing the new release and adding announcement
>> >>> blog post [6].
>> >>>
>> >>> The vote will be open for at least 72 hours. It is adopted by majority
>> >>> approval, with at least 3 PMC affirmative votes.
>> >>>
>> >>> [1]
>> >>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?
>> >>> projectId=12315522=12343899
>> >>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.5.4/
>> >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> >>> [4]
>> >> https://repository.apache.org/content/repositories/orgapacheflink-1181
>> >>> [5] https://github.com/apache/flink/tree/release-1.5.4-rc1
>> >>> [6] https://github.com/apache/flink-web/pull/123
>> >>>
>> >>> Cheers,
>> >>> Till
>> >>>
>> >>> Pro-tip: you can create a settings.xml file with these contents:
>> >>>
>> >>> 
>> >>> 
>> >>>flink-1.6.0
>> >>> 
>> >>> 
>> >>>
>> >>>  flink-1.6.0
>> >>>  
>> >>>
>> >>>  flink-1.6.0
>> >>>  
>> >>>
>> >>>
>> https://repository.apache.org/content/repositories/orgapacheflink-1181/
>> >>> <
>> https://repository.apache.org/content/repositories/orgapacheflink-1178/
>> >>>
>> >>>  
>> >>>
>> >>>
>> >>>  archetype
>> >>>  
>> >>>
>> >>>
>> https://repository.apache.org/content/repositories/orgapacheflink-1181/
>> >>> <
>> https://repository.apache.org/content/repositories/orgapacheflink-1178/
>> >>>
>> >>>  
>> >>>
>> >>>  
>> >>>
>> >>> 
>> >>> 
>> >>>
>> >>> And reference that in your maven commands via --settings
>> >>> path/to/settings.xml. This is useful for creating a quickstart based
>> on
>> >> the
>> >>> staged release and for building against the staged jars.
>> >>>
>>
>>


Re: [VOTE] Release 1.6.1, release candidate #1

2018-09-20 Thread Till Rohrmann
Thanks everyone for testing and voting on this RC. Hereby I close the vote.
The result will be announced in a separate thread.

Cheers,
Till

On Thu, Sep 20, 2018 at 3:26 PM Timo Walther  wrote:

> +1 (binding)
>
> - Checked all issues that went into the release (I found two JIRA issue
> that have been incorrectly marked)
> - Built the source and verify locally successfully
> - Run a couple of end-to-end tests successfully
>
> Regards,
> Timo
>
>
> Am 20.09.18 um 11:13 schrieb Tzu-Li (Gordon) Tai:
> > +1 (binding)
> >
> > - Verified checksums / signatures
> > - Checked announcement PR in flink-web
> > - Built Flink from sources, test + build passes (Hadoop-free, Scala 2.11)
> > - Ran Elasticsearch 6 sink, using quickstart POM.
> > - Ran end-to-end tests locally, passes
> >
> > On Thu, Sep 20, 2018 at 4:31 PM Till Rohrmann 
> wrote:
> >
> >> +1 (binding)
> >>
> >> - Verified checksums and signatures
> >> - Ran all end-to-end tests
> >> - Executed Jepsen test suite (including test for standby JobManager)
> >> - Build Flink from sources
> >> - Verified that no new dependencies were added
> >>
> >> Cheers,
> >> Till
> >>
> >> On Thu, Sep 20, 2018 at 9:23 AM Till Rohrmann 
> >> wrote:
> >>
> >>> I would not block the release on FLINK-10243 since you can always
> >>> deactivate the latency metrics. Instead we should discuss whether to
> back
> >>> port this scalability improvement and include it in a next bug fix
> >> release.
> >>> For that, I suggest to write on the JIRA thread.
> >>>
> >>> Cheers,
> >>> Till
> >>>
> >>> On Thu, Sep 20, 2018 at 8:52 AM shimin yang 
> wrote:
> >>>
>  -1
> 
>  Could you merge the FLINK-10243 into release 1.6.1. I think the
>  configurable latency metrics will be quite useful and not much work to
>  merge.
> 
>  Best,
>  Shimin
> 
>  vino yang  wrote on Thu, Sep 20, 2018 at 2:08 PM:
> 
> > +1
> >
> > I checked the new Flink version in the  root pom file.
> > I checked the announcement blog post and make sure the version number
> >> is
> > right.
> > I checked out the source code and ran mvn package (without test)
> >
> > Thanks, vino.
> >
> > Fabian Hueske  wrote on Thu, Sep 20, 2018 at 4:54 AM:
> >
> >> +1 binding
> >>
> >> * I checked the diffs and did not find any added dependencies or
>  updated
> >> dependency versions.
> >> * I checked the sha hash and signatures of all release artifacts.
> >>
> >> Best, Fabian
> >>
> >> 2018-09-19 11:43 GMT+02:00 Gary Yao :
> >>
> >>> +1 (non-binding)
> >>>
> >>> Ran test suite from the flink-jepsen project on AWS EC2 without
>  issues.
> >>> Best,
> >>> Gary
> >>>
> >>> On Sat, Sep 15, 2018 at 11:32 PM, Till Rohrmann <
>  trohrm...@apache.org>
> >>> wrote:
> >>>
>  Hi everyone,
>  Please review and vote on the release candidate #1 for the
> >> version
> >> 1.6.1,
>  as follows:
>  [ ] +1, Approve the release
>  [ ] -1, Do not approve the release (please provide specific
>  comments)
> 
>  The complete staging area is available for your review, which
> > includes:
>  * JIRA release notes [1],
>  * the official Apache source release and binary convenience
>  releases
> > to
> >>> be
>  deployed to dist.apache.org [2], which are signed with the key
>  with fingerprint 1F302569A96CFFD5 [3],
>  * all artifacts to be deployed to the Maven Central Repository
>  [4],
>  * source code tag "release-1.6.1-rc1" [5],
>  * website pull request listing the new release and adding
> > announcement
>  blog post [6].
> 
>  The vote will be open for at least 72 hours. It is adopted by
> > majority
>  approval, with at least 3 PMC affirmative votes.
> 
>  [1]
>  https://issues.apache.org/jira/secure/ReleaseNote.jspa?
>  projectId=12315522=12343752
>  [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.6.1/
>  [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>  [4] https://repository.apache.org/content/repositories/
> >>> orgapacheflink-1180
>  [5] https://github.com/apache/flink/tree/release-1.6.1-rc1
>  [6] https://github.com/apache/flink-web/pull/124
> 
>  Cheers,
>  Till
> 
>  Pro-tip: you can create a settings.xml file with these contents:
> 
>  
>  
> flink-1.6.0
>  
>  
> 
>   flink-1.6.0
>   
> 
>   flink-1.6.0
>   
> 
> 
> 
> https://repository.apache.org/content/repositories/orgapacheflink-1180/
>  <
> 
> https://repository.apache.org/content/repositories/orgapacheflink-1178/
>   
> 
> 
> 

Re: [VOTE] Release 1.6.1, release candidate #1

2018-09-20 Thread Timo Walther

+1 (binding)

- Checked all issues that went into the release (I found two JIRA issues 
that have been incorrectly marked)

- Built the source and verified it locally successfully
- Ran a couple of end-to-end tests successfully

Regards,
Timo


Am 20.09.18 um 11:13 schrieb Tzu-Li (Gordon) Tai:

+1 (binding)

- Verified checksums / signatures
- Checked announcement PR in flink-web
- Built Flink from sources, test + build passes (Hadoop-free, Scala 2.11)
- Ran Elasticsearch 6 sink, using quickstart POM.
- Ran end-to-end tests locally, passes

On Thu, Sep 20, 2018 at 4:31 PM Till Rohrmann  wrote:


+1 (binding)

- Verified checksums and signatures
- Ran all end-to-end tests
- Executed Jepsen test suite (including test for standby JobManager)
- Build Flink from sources
- Verified that no new dependencies were added

Cheers,
Till

On Thu, Sep 20, 2018 at 9:23 AM Till Rohrmann 
wrote:


I would not block the release on FLINK-10243 since you can always
deactivate the latency metrics. Instead we should discuss whether to back
port this scalability improvement and include it in a next bug fix

release.

For that, I suggest to write on the JIRA thread.

Cheers,
Till

On Thu, Sep 20, 2018 at 8:52 AM shimin yang  wrote:


-1

Could you merge the FLINK-10243 into release 1.6.1. I think the
configurable latency metrics will be quite useful and not much work to
merge.

Best,
Shimin

vino yang  wrote on Thu, Sep 20, 2018 at 2:08 PM:


+1

I checked the new Flink version in the  root pom file.
I checked the announcement blog post and make sure the version number

is

right.
I checked out the source code and ran mvn package (without test)

Thanks, vino.

Fabian Hueske  wrote on Thu, Sep 20, 2018 at 4:54 AM:


+1 binding

* I checked the diffs and did not find any added dependencies or

updated

dependency versions.
* I checked the sha hash and signatures of all release artifacts.

Best, Fabian

2018-09-19 11:43 GMT+02:00 Gary Yao :


+1 (non-binding)

Ran test suite from the flink-jepsen project on AWS EC2 without

issues.

Best,
Gary

On Sat, Sep 15, 2018 at 11:32 PM, Till Rohrmann <

trohrm...@apache.org>

wrote:


Hi everyone,
Please review and vote on the release candidate #1 for the

version

1.6.1,

as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific

comments)


The complete staging area is available for your review, which

includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience

releases

to

be

deployed to dist.apache.org [2], which are signed with the key
with fingerprint 1F302569A96CFFD5 [3],
* all artifacts to be deployed to the Maven Central Repository

[4],

* source code tag "release-1.6.1-rc1" [5],
* website pull request listing the new release and adding

announcement

blog post [6].

The vote will be open for at least 72 hours. It is adopted by

majority

approval, with at least 3 PMC affirmative votes.

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?
projectId=12315522=12343752
[2] https://dist.apache.org/repos/dist/dev/flink/flink-1.6.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/

orgapacheflink-1180

[5] https://github.com/apache/flink/tree/release-1.6.1-rc1
[6] https://github.com/apache/flink-web/pull/124

Cheers,
Till

Pro-tip: you can create a settings.xml file with these contents:



   flink-1.6.0


   
 flink-1.6.0
 
   
 flink-1.6.0
 



https://repository.apache.org/content/repositories/orgapacheflink-1180/

<

https://repository.apache.org/content/repositories/orgapacheflink-1178/

 
   
   
 archetype
 



https://repository.apache.org/content/repositories/orgapacheflink-1180/

<

https://repository.apache.org/content/repositories/orgapacheflink-1178/

 
   
 
   



And reference that in your maven commands via --settings
path/to/settings.xml. This is useful for creating a quickstart

based

on

the

staged release and for building against the staged jars.





[jira] [Created] (FLINK-10377) Remove precondition in TwoPhaseCommitSinkFunction.notifyCheckpointComplete

2018-09-20 Thread Stefan Richter (JIRA)
Stefan Richter created FLINK-10377:
--

 Summary: Remove precondition in 
TwoPhaseCommitSinkFunction.notifyCheckpointComplete
 Key: FLINK-10377
 URL: https://issues.apache.org/jira/browse/FLINK-10377
 Project: Flink
  Issue Type: Bug
  Components: Streaming Connectors
Affects Versions: 1.6.0, 1.5.0
Reporter: Stefan Richter
Assignee: Stefan Richter


The precondition {{checkState(pendingTransactionIterator.hasNext(), "checkpoint 
completed, but no transaction pending");}} in 
{{TwoPhaseCommitSinkFunction.notifyCheckpointComplete()}} seems too strict, 
because checkpoint-complete notifications can overtake each other, and the late 
notification will then fail the precondition. In this case the commit was 
already performed by the first notification, which subsumes the late 
checkpoint. I think the check can be removed.
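
A minimal sketch of what the relaxed handling could look like (an assumption 
for illustration only, not the actual patch):
{code:java}
// Sketch only: instead of failing, treat a missing pending transaction as
// "already committed and subsumed by an earlier notification".
if (!pendingTransactionIterator.hasNext()) {
	LOG.debug("Checkpoint {} complete, but no transaction pending; " +
		"it was probably subsumed by an earlier notification.", checkpointId);
	return;
}
{code}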



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10376) BlobCacheCleanupTest.testPermanentBlobCleanup failed on Travis

2018-09-20 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-10376:
-

 Summary: BlobCacheCleanupTest.testPermanentBlobCleanup failed on 
Travis
 Key: FLINK-10376
 URL: https://issues.apache.org/jira/browse/FLINK-10376
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.7.0
Reporter: Till Rohrmann
 Fix For: 1.7.0


The {{BlobCacheCleanupTest.testPermanentBlobCleanup}} failed on Travis with the 
following exception:
{code}
testPermanentBlobCleanup(org.apache.flink.runtime.blob.BlobCacheCleanupTest)  
Time elapsed: 1.021 sec  <<< ERROR!
java.io.IOException: Cannot create directory 
'/tmp/junit6933344779576098111/junit6230481778643276963/blobStore-77f235ff-1721-4c30-9bec-db4004fe8859/job_9ca4b9530b367af6f554dad6458ca3ad'.
at 
org.apache.flink.runtime.blob.BlobUtils.mkdirTolerateExisting(BlobUtils.java:214)
at 
org.apache.flink.runtime.blob.BlobUtils.getStorageLocation(BlobUtils.java:237)
at 
org.apache.flink.runtime.blob.PermanentBlobCache.getStorageLocation(PermanentBlobCache.java:222)
at 
org.apache.flink.runtime.blob.BlobServerCleanupTest.checkFilesExist(BlobServerCleanupTest.java:213)
at 
org.apache.flink.runtime.blob.BlobCacheCleanupTest.verifyJobCleanup(BlobCacheCleanupTest.java:432)
at 
org.apache.flink.runtime.blob.BlobCacheCleanupTest.testPermanentBlobCleanup(BlobCacheCleanupTest.java:133)
{code}

https://api.travis-ci.org/v3/job/430910115/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [Possible Bug] Savepoints Akka Timeout

2018-09-20 Thread Till Rohrmann
Hi Sayat,

I think your problem might be caused by
https://issues.apache.org/jira/browse/FLINK-10193. This will be fixed with
the next bug fix release which will happen in the next days.

In the future, please post this kind of question to u...@flink.apache.org.
The dev mailing list is intended for Flink development discussions.

Cheers,
Till

On Thu, Sep 20, 2018 at 2:04 PM Sayat Satybaldiyev 
wrote:

> Dear Flink Community!
>
> I've initially posted my question on SO:
>
> https://stackoverflow.com/questions/52422499/flink-savepoints-akka-pattern-asktimeoutexception-ask-timed-out-on-actorakka
>
> However, after further investigation, I think it's the issue with Flink
> 1.6.0 cluster rather me doing something wrong.
>
> In nutshell, I'm trying to create a savepoint in Flink job with a rocksdb
> backend. However, flink cli give me error: CompletionException:
> akka.pattern.AskTimeoutException: Ask timed out on
> [Actor[akka://flink/user/jobmanager_0#1140973613]] after [30 ms].
> Sender[null] sent message of type
> "org.apache.flink.runtime.rpc.messages.LocalFencedMessage"
>
> I've checked JM and TM logs and found that TM is doing savepoints of
> operators without an exception and in couple seconds.
>
> flink savepoint --jobmanager xx-xx-5:8081 e569cf53baecae9cb4fa794d590d670f
> hdfs://foundationhdfs/user/flink/savepoint3
>
> I've grepped by savepoint3 and savepoint4 and everything looks good to me.
>
> Could anyone please to help me understand if it's a bug or feature of Flink
> 1.6.0? ;)
>
> Flink JM & TM logs:
>
> https://drive.google.com/file/d/1Mg0qKJDOkYY14iM_gNO4QYjLB81JGzig/view?usp=sharing
>
> https://drive.google.com/file/d/19l2HDR9bB7NC-SRyxvKY4qAN0BD650uA/view?usp=sharing
>


Re: [VOTE] Release 1.5.4, release candidate #1

2018-09-20 Thread Till Rohrmann
+1 (binding)

- Verified checksums and signatures
- Ran all end-to-end tests
- Executed Jepsen test suite (including test for standby JobManager)
- Build Flink from sources
- Verified that no new dependencies were added

Cheers,
Till

On Thu, Sep 20, 2018 at 1:54 PM Timo Walther  wrote:

> +1 (binding)
>
> - Checked all issues that went into the release (I found one JIRA issue
> that has been incorrectly marked)
> - Built from source (On my machine SelfJoinDeadlockITCase is failing due
> to a timeout, in the IDE it works correctly. I guess my machine was just
> too busy.)
> - Run some end-to-end tests locally with success
>
> Regards,
> Timo
>
>
> Am 20.09.18 um 11:24 schrieb Tzu-Li (Gordon) Tai:
> > +1 (binding)
> >
> > - Verified checksums / signatures
> > - Checked announcement PR in flink-web
> > - No new / changed dependencies
> > - Built from source (Hadoop-free, Scala 2.11)
> > - Run end-to-end tests locally, passes
> >
> > On Thu, Sep 20, 2018 at 5:03 AM Fabian Hueske  wrote:
> >
> >> +1 binding
> >>
> >> * I checked the diffs and did not find any added dependencies or updated
> >> dependency versions.
> >> * I checked the sha hash and signatures of all release artifacts.
> >>
> >> Best, Fabian
> >>
> >> 2018-09-15 23:26 GMT+02:00 Till Rohrmann :
> >>
> >>> Hi everyone,
> >>> Please review and vote on the release candidate #1 for the version
> 1.5.4,
> >>> as follows:
> >>> [ ] +1, Approve the release
> >>> [ ] -1, Do not approve the release (please provide specific comments)
> >>>
> >>>
> >>> The complete staging area is available for your review, which includes:
> >>> * JIRA release notes [1],
> >>> * the official Apache source release and binary convenience releases to
> >> be
> >>> deployed to dist.apache.org [2], which are signed with the key
> >>> with fingerprint 1F302569A96CFFD5 [3],
> >>> * all artifacts to be deployed to the Maven Central Repository [4],
> >>> * source code tag "release-1.5.4-rc1" [5],
> >>> * website pull request listing the new release and adding announcement
> >>> blog post [6].
> >>>
> >>> The vote will be open for at least 72 hours. It is adopted by majority
> >>> approval, with at least 3 PMC affirmative votes.
> >>>
> >>> [1]
> >>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> >>> projectId=12315522=12343899
> >>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.5.4/
> >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >>> [4]
> >> https://repository.apache.org/content/repositories/orgapacheflink-1181
> >>> [5] https://github.com/apache/flink/tree/release-1.5.4-rc1
> >>> [6] https://github.com/apache/flink-web/pull/123
> >>>
> >>> Cheers,
> >>> Till
> >>>
> >>> Pro-tip: you can create a settings.xml file with these contents:
> >>>
> >>> <settings>
> >>>   <activeProfiles>
> >>>     <activeProfile>flink-1.6.0</activeProfile>
> >>>   </activeProfiles>
> >>>   <profiles>
> >>>     <profile>
> >>>       <id>flink-1.6.0</id>
> >>>       <repositories>
> >>>         <repository>
> >>>           <id>flink-1.6.0</id>
> >>>           <url>https://repository.apache.org/content/repositories/orgapacheflink-1181/</url>
> >>>         </repository>
> >>>         <repository>
> >>>           <id>archetype</id>
> >>>           <url>https://repository.apache.org/content/repositories/orgapacheflink-1181/</url>
> >>>         </repository>
> >>>       </repositories>
> >>>     </profile>
> >>>   </profiles>
> >>> </settings>
> >>>
> >>> And reference that in your maven commands via --settings
> >>> path/to/settings.xml. This is useful for creating a quickstart based on
> >> the
> >>> staged release and for building against the staged jars.
> >>>
>
>


[Possible Bug] Savepoints Akka Timeout

2018-09-20 Thread Sayat Satybaldiyev
Dear Flink Community!

I've initially posted my question on SO:
https://stackoverflow.com/questions/52422499/flink-savepoints-akka-pattern-asktimeoutexception-ask-timed-out-on-actorakka

However, after further investigation, I think it's an issue with the Flink
1.6.0 cluster rather than me doing something wrong.

In a nutshell, I'm trying to create a savepoint in a Flink job with a RocksDB
backend. However, the Flink CLI gives me the error: CompletionException:
akka.pattern.AskTimeoutException: Ask timed out on
[Actor[akka://flink/user/jobmanager_0#1140973613]] after [30 ms].
Sender[null] sent message of type
"org.apache.flink.runtime.rpc.messages.LocalFencedMessage"

I've checked the JM and TM logs and found that the TM completes the savepoints
of the operators without an exception and within a couple of seconds.

flink savepoint --jobmanager xx-xx-5:8081 e569cf53baecae9cb4fa794d590d670f
hdfs://foundationhdfs/user/flink/savepoint3

I've grepped for savepoint3 and savepoint4 and everything looks good to me.

Could anyone please help me understand whether it's a bug or a feature of Flink
1.6.0? ;)

Flink JM & TM logs:
https://drive.google.com/file/d/1Mg0qKJDOkYY14iM_gNO4QYjLB81JGzig/view?usp=sharing
https://drive.google.com/file/d/19l2HDR9bB7NC-SRyxvKY4qAN0BD650uA/view?usp=sharing


Re: [VOTE] Release 1.5.4, release candidate #1

2018-09-20 Thread Timo Walther

+1 (binding)

- Checked all issues that went into the release (I found one JIRA issue 
that has been incorrectly marked)
- Built from source (On my machine SelfJoinDeadlockITCase is failing due
to a timeout; in the IDE it works correctly. I guess my machine was just
too busy.)

- Run some end-to-end tests locally with success

Regards,
Timo


Am 20.09.18 um 11:24 schrieb Tzu-Li (Gordon) Tai:

+1 (binding)

- Verified checksums / signatures
- Checked announcement PR in flink-web
- No new / changed dependencies
- Built from source (Hadoop-free, Scala 2.11)
- Run end-to-end tests locally, passes

On Thu, Sep 20, 2018 at 5:03 AM Fabian Hueske  wrote:


+1 binding

* I checked the diffs and did not find any added dependencies or updated
dependency versions.
* I checked the sha hash and signatures of all release artifacts.

Best, Fabian

2018-09-15 23:26 GMT+02:00 Till Rohrmann :


Hi everyone,
Please review and vote on the release candidate #1 for the version 1.5.4,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to

be

deployed to dist.apache.org [2], which are signed with the key
with fingerprint 1F302569A96CFFD5 [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.4-rc1" [5],
* website pull request listing the new release and adding announcement
blog post [6].

The vote will be open for at least 72 hours. It is adopted by majority
approval, with at least 3 PMC affirmative votes.

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?
projectId=12315522=12343899
[2] https://dist.apache.org/repos/dist/dev/flink/flink-1.5.4/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4]

https://repository.apache.org/content/repositories/orgapacheflink-1181

[5] https://github.com/apache/flink/tree/release-1.5.4-rc1
[6] https://github.com/apache/flink-web/pull/123

Cheers,
Till

Pro-tip: you can create a settings.xml file with these contents:



   flink-1.6.0


   
 flink-1.6.0
 
   
 flink-1.6.0
 

https://repository.apache.org/content/repositories/orgapacheflink-1181/


[jira] [Created] (FLINK-10375) ExceptionInChainedStubException hides wrapped exception in cause

2018-09-20 Thread Mike Pedersen (JIRA)
Mike Pedersen created FLINK-10375:
-

 Summary: ExceptionInChainedStubException hides wrapped exception 
in cause
 Key: FLINK-10375
 URL: https://issues.apache.org/jira/browse/FLINK-10375
 Project: Flink
  Issue Type: Bug
  Components: Core
Reporter: Mike Pedersen


ExceptionInChainedStubException does not set the wrapped exception as its
cause. This produces generally unhelpful stack traces like the following:
{code:java}
org.apache.beam.sdk.util.UserCodeException: org.apache.flink.runtime.operators.chaining.ExceptionInChainedStubException
    at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
    at org.apache.beam.sdk.io.WriteFiles$ApplyShardingKeyFn$DoFnInvoker.invokeProcessElement(Unknown Source)
    at org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:185)
    at org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:149)
    at org.apache.beam.runners.flink.metrics.DoFnRunnerWithMetricsUpdate.processElement(DoFnRunnerWithMetricsUpdate.java:66)
    at org.apache.beam.runners.flink.translation.functions.FlinkDoFnFunction.mapPartition(FlinkDoFnFunction.java:120)
    at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103)
    at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:490)
    at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:355)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.runtime.operators.chaining.ExceptionInChainedStubException
    at org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.collect(ChainedFlatMapDriver.java:82)
    at org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
    at org.apache.beam.runners.flink.translation.functions.FlinkDoFnFunction$DoFnOutputManager.output(FlinkDoFnFunction.java:149)
    at org.apache.beam.runners.core.SimpleDoFnRunner.outputWindowedValue(SimpleDoFnRunner.java:219)
    at org.apache.beam.runners.core.SimpleDoFnRunner.access$700(SimpleDoFnRunner.java:69)
    at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:517)
    at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:505)
    at org.apache.beam.sdk.io.WriteFiles$ApplyShardingKeyFn.processElement(WriteFiles.java:686)
{code}
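
For illustration, here is a minimal sketch of the idea: if the wrapped exception is passed to the superclass constructor as the cause, standard stack traces end in the real user-code failure. The class and message names below are assumptions for the sketch, not the actual Flink implementation:
{code:java}
// Hypothetical sketch only -- the real class lives in
// org.apache.flink.runtime.operators.chaining and may differ in detail.
public class ChainedStubExceptionSketch {

    /** Simplified stand-in for ExceptionInChainedStubException. */
    static class ChainedStubException extends RuntimeException {
        private final Exception wrapped;

        ChainedStubException(String taskName, Exception wrapped) {
            // Passing 'wrapped' to the superclass makes it the cause, so it is
            // printed as "Caused by: ..." instead of being swallowed.
            super("Exception in chained task '" + taskName + "'", wrapped);
            this.wrapped = wrapped;
        }

        Exception getWrappedException() {
            return wrapped;
        }
    }

    public static void main(String[] args) {
        try {
            throw new ChainedStubException("ChainedFlatMap (example)",
                    new IllegalStateException("the actual user-code failure"));
        } catch (ChainedStubException e) {
            // The printed trace now ends in the user-code failure.
            e.printStackTrace();
        }
    }
}
{code}
With the cause set, even a plain printStackTrace() points at the user-code failure; callers could still retrieve it explicitly through the accessor.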



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release 1.5.4, release candidate #1

2018-09-20 Thread Tzu-Li (Gordon) Tai
+1 (binding)

- Verified checksums / signatures
- Checked announcement PR in flink-web
- No new / changed dependencies
- Built from source (Hadoop-free, Scala 2.11)
- Run end-to-end tests locally, passes

On Thu, Sep 20, 2018 at 5:03 AM Fabian Hueske  wrote:

> +1 binding
>
> * I checked the diffs and did not find any added dependencies or updated
> dependency versions.
> * I checked the sha hash and signatures of all release artifacts.
>
> Best, Fabian
>
> 2018-09-15 23:26 GMT+02:00 Till Rohrmann :
>
> > Hi everyone,
> > Please review and vote on the release candidate #1 for the version 1.5.4,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release and binary convenience releases to
> be
> > deployed to dist.apache.org [2], which are signed with the key
> > with fingerprint 1F302569A96CFFD5 [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-1.5.4-rc1" [5],
> > * website pull request listing the new release and adding announcement
> > blog post [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > [1]
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > projectId=12315522=12343899
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.5.4/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1181
> > [5] https://github.com/apache/flink/tree/release-1.5.4-rc1
> > [6] https://github.com/apache/flink-web/pull/123
> >
> > Cheers,
> > Till
> >
> > Pro-tip: you can create a settings.xml file with these contents:
> >
> > <settings>
> >   <activeProfiles>
> >     <activeProfile>flink-1.6.0</activeProfile>
> >   </activeProfiles>
> >   <profiles>
> >     <profile>
> >       <id>flink-1.6.0</id>
> >       <repositories>
> >         <repository>
> >           <id>flink-1.6.0</id>
> >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1181/</url>
> >         </repository>
> >         <repository>
> >           <id>archetype</id>
> >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1181/</url>
> >         </repository>
> >       </repositories>
> >     </profile>
> >   </profiles>
> > </settings>
> >
> > And reference that in your maven commands via --settings
> > path/to/settings.xml. This is useful for creating a quickstart based on
> the
> > staged release and for building against the staged jars.
> >
>


Re: [VOTE] Release 1.5.4, release candidate #1

2018-09-20 Thread Gary Yao
+1 (non-binding)

Ran test suite from the flink-jepsen project on AWS EC2 without issues.

Best,
Gary


On Wed, Sep 19, 2018 at 11:02 PM, Fabian Hueske  wrote:

> +1 binding
>
> * I checked the diffs and did not find any added dependencies or updated
> dependency versions.
> * I checked the sha hash and signatures of all release artifacts.
>
> Best, Fabian
>
> 2018-09-15 23:26 GMT+02:00 Till Rohrmann :
>
> > Hi everyone,
> > Please review and vote on the release candidate #1 for the version 1.5.4,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release and binary convenience releases to
> be
> > deployed to dist.apache.org [2], which are signed with the key
> > with fingerprint 1F302569A96CFFD5 [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-1.5.4-rc1" [5],
> > * website pull request listing the new release and adding announcement
> > blog post [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > [1]
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > projectId=12315522=12343899
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.5.4/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4] https://repository.apache.org/content/repositories/
> orgapacheflink-1181
> > [5] https://github.com/apache/flink/tree/release-1.5.4-rc1
> > [6] https://github.com/apache/flink-web/pull/123
> >
> > Cheers,
> > Till
> >
> > Pro-tip: you can create a settings.xml file with these contents:
> >
> > <settings>
> >   <activeProfiles>
> >     <activeProfile>flink-1.6.0</activeProfile>
> >   </activeProfiles>
> >   <profiles>
> >     <profile>
> >       <id>flink-1.6.0</id>
> >       <repositories>
> >         <repository>
> >           <id>flink-1.6.0</id>
> >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1181/</url>
> >         </repository>
> >         <repository>
> >           <id>archetype</id>
> >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1181/</url>
> >         </repository>
> >       </repositories>
> >     </profile>
> >   </profiles>
> > </settings>
> >
> > And reference that in your maven commands via --settings
> > path/to/settings.xml. This is useful for creating a quickstart based on
> the
> > staged release and for building against the staged jars.
> >
>


Re: [VOTE] Release 1.6.1, release candidate #1

2018-09-20 Thread Tzu-Li (Gordon) Tai
+1 (binding)

- Verified checksums / signatures
- Checked announcement PR in flink-web
- Built Flink from sources, test + build passes (Hadoop-free, Scala 2.11)
- Ran Elasticsearch 6 sink, using quickstart POM.
- Ran end-to-end tests locally, passes

On Thu, Sep 20, 2018 at 4:31 PM Till Rohrmann  wrote:

> +1 (binding)
>
> - Verified checksums and signatures
> - Ran all end-to-end tests
> - Executed Jepsen test suite (including test for standby JobManager)
> - Build Flink from sources
> - Verified that no new dependencies were added
>
> Cheers,
> Till
>
> On Thu, Sep 20, 2018 at 9:23 AM Till Rohrmann 
> wrote:
>
> > I would not block the release on FLINK-10243 since you can always
> > deactivate the latency metrics. Instead we should discuss whether to back
> > port this scalability improvement and include it in a next bug fix
> release.
> > For that, I suggest to write on the JIRA thread.
> >
> > Cheers,
> > Till
> >
> > On Thu, Sep 20, 2018 at 8:52 AM shimin yang  wrote:
> >
> >> -1
> >>
> >> Could you merge the FLINK-10243 into release 1.6.1. I think the
> >> configurable latency metrics will be quite useful and not much work to
> >> merge.
> >>
> >> Best,
> >> Shimin
> >>
> >> vino yang  于2018年9月20日周四 下午2:08写道:
> >>
> >> > +1
> >> >
> >> > I checked the new Flink version in the  root pom file.
> >> > I checked the announcement blog post and make sure the version number
> is
> >> > right.
> >> > I checked out the source code and ran mvn package (without test)
> >> >
> >> > Thanks, vino.
> >> >
> >> > Fabian Hueske  于2018年9月20日周四 上午4:54写道:
> >> >
> >> > > +1 binding
> >> > >
> >> > > * I checked the diffs and did not find any added dependencies or
> >> updated
> >> > > dependency versions.
> >> > > * I checked the sha hash and signatures of all release artifacts.
> >> > >
> >> > > Best, Fabian
> >> > >
> >> > > 2018-09-19 11:43 GMT+02:00 Gary Yao :
> >> > >
> >> > > > +1 (non-binding)
> >> > > >
> >> > > > Ran test suite from the flink-jepsen project on AWS EC2 without
> >> issues.
> >> > > >
> >> > > > Best,
> >> > > > Gary
> >> > > >
> >> > > > On Sat, Sep 15, 2018 at 11:32 PM, Till Rohrmann <
> >> trohrm...@apache.org>
> >> > > > wrote:
> >> > > >
> >> > > > > Hi everyone,
> >> > > > > Please review and vote on the release candidate #1 for the
> version
> >> > > 1.6.1,
> >> > > > > as follows:
> >> > > > > [ ] +1, Approve the release
> >> > > > > [ ] -1, Do not approve the release (please provide specific
> >> comments)
> >> > > > >
> >> > > > >
> >> > > > > The complete staging area is available for your review, which
> >> > includes:
> >> > > > > * JIRA release notes [1],
> >> > > > > * the official Apache source release and binary convenience
> >> releases
> >> > to
> >> > > > be
> >> > > > > deployed to dist.apache.org [2], which are signed with the key
> >> > > > > with fingerprint 1F302569A96CFFD5 [3],
> >> > > > > * all artifacts to be deployed to the Maven Central Repository
> >> [4],
> >> > > > > * source code tag "release-1.6.1-rc1" [5],
> >> > > > > * website pull request listing the new release and adding
> >> > announcement
> >> > > > > blog post [6].
> >> > > > >
> >> > > > > The vote will be open for at least 72 hours. It is adopted by
> >> > majority
> >> > > > > approval, with at least 3 PMC affirmative votes.
> >> > > > >
> >> > > > > [1]
> >> > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> >> > > > > projectId=12315522=12343752
> >> > > > > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.6.1/
> >> > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >> > > > > [4] https://repository.apache.org/content/repositories/
> >> > > > orgapacheflink-1180
> >> > > > > [5] https://github.com/apache/flink/tree/release-1.6.1-rc1
> >> > > > > [6] https://github.com/apache/flink-web/pull/124
> >> > > > >
> >> > > > > Cheers,
> >> > > > > Till
> >> > > > >
> >> > > > > Pro-tip: you can create a settings.xml file with these contents:
> >> > > > >
> >> > > > > <settings>
> >> > > > >   <activeProfiles>
> >> > > > >     <activeProfile>flink-1.6.0</activeProfile>
> >> > > > >   </activeProfiles>
> >> > > > >   <profiles>
> >> > > > >     <profile>
> >> > > > >       <id>flink-1.6.0</id>
> >> > > > >       <repositories>
> >> > > > >         <repository>
> >> > > > >           <id>flink-1.6.0</id>
> >> > > > >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1180/</url>
> >> > > > >         </repository>
> >> > > > >         <repository>
> >> > > > >           <id>archetype</id>
> >> > > > >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1180/</url>
> >> > > > >         </repository>
> >> > > > >       </repositories>
> >> > > > >     </profile>
> >> > > > >   </profiles>
> >> > > > > </settings>
> >> > > > >
> >> > > > > And reference that in your maven commands via --settings
> >> > > > > path/to/settings.xml. This is useful for 

Re: [VOTE] Release 1.6.1, release candidate #1

2018-09-20 Thread Till Rohrmann
+1 (binding)

- Verified checksums and signatures
- Ran all end-to-end tests
- Executed Jepsen test suite (including test for standby JobManager)
- Build Flink from sources
- Verified that no new dependencies were added

Cheers,
Till

On Thu, Sep 20, 2018 at 9:23 AM Till Rohrmann  wrote:

> I would not block the release on FLINK-10243 since you can always
> deactivate the latency metrics. Instead we should discuss whether to back
> port this scalability improvement and include it in a next bug fix release.
> For that, I suggest to write on the JIRA thread.
>
> Cheers,
> Till
>
> On Thu, Sep 20, 2018 at 8:52 AM shimin yang  wrote:
>
>> -1
>>
>> Could you merge the FLINK-10243 into release 1.6.1. I think the
>> configurable latency metrics will be quite useful and not much work to
>> merge.
>>
>> Best,
>> Shimin
>>
>> vino yang  于2018年9月20日周四 下午2:08写道:
>>
>> > +1
>> >
>> > I checked the new Flink version in the  root pom file.
>> > I checked the announcement blog post and make sure the version number is
>> > right.
>> > I checked out the source code and ran mvn package (without test)
>> >
>> > Thanks, vino.
>> >
>> > Fabian Hueske  于2018年9月20日周四 上午4:54写道:
>> >
>> > > +1 binding
>> > >
>> > > * I checked the diffs and did not find any added dependencies or
>> updated
>> > > dependency versions.
>> > > * I checked the sha hash and signatures of all release artifacts.
>> > >
>> > > Best, Fabian
>> > >
>> > > 2018-09-19 11:43 GMT+02:00 Gary Yao :
>> > >
>> > > > +1 (non-binding)
>> > > >
>> > > > Ran test suite from the flink-jepsen project on AWS EC2 without
>> issues.
>> > > >
>> > > > Best,
>> > > > Gary
>> > > >
>> > > > On Sat, Sep 15, 2018 at 11:32 PM, Till Rohrmann <
>> trohrm...@apache.org>
>> > > > wrote:
>> > > >
>> > > > > Hi everyone,
>> > > > > Please review and vote on the release candidate #1 for the version
>> > > 1.6.1,
>> > > > > as follows:
>> > > > > [ ] +1, Approve the release
>> > > > > [ ] -1, Do not approve the release (please provide specific
>> comments)
>> > > > >
>> > > > >
>> > > > > The complete staging area is available for your review, which
>> > includes:
>> > > > > * JIRA release notes [1],
>> > > > > * the official Apache source release and binary convenience
>> releases
>> > to
>> > > > be
>> > > > > deployed to dist.apache.org [2], which are signed with the key
>> > > > > with fingerprint 1F302569A96CFFD5 [3],
>> > > > > * all artifacts to be deployed to the Maven Central Repository
>> [4],
>> > > > > * source code tag "release-1.6.1-rc1" [5],
>> > > > > * website pull request listing the new release and adding
>> > announcement
>> > > > > blog post [6].
>> > > > >
>> > > > > The vote will be open for at least 72 hours. It is adopted by
>> > majority
>> > > > > approval, with at least 3 PMC affirmative votes.
>> > > > >
>> > > > > [1]
>> > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
>> > > > > projectId=12315522=12343752
>> > > > > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.6.1/
>> > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> > > > > [4] https://repository.apache.org/content/repositories/
>> > > > orgapacheflink-1180
>> > > > > [5] https://github.com/apache/flink/tree/release-1.6.1-rc1
>> > > > > [6] https://github.com/apache/flink-web/pull/124
>> > > > >
>> > > > > Cheers,
>> > > > > Till
>> > > > >
>> > > > > Pro-tip: you can create a settings.xml file with these contents:
>> > > > >
>> > > > > <settings>
>> > > > >   <activeProfiles>
>> > > > >     <activeProfile>flink-1.6.0</activeProfile>
>> > > > >   </activeProfiles>
>> > > > >   <profiles>
>> > > > >     <profile>
>> > > > >       <id>flink-1.6.0</id>
>> > > > >       <repositories>
>> > > > >         <repository>
>> > > > >           <id>flink-1.6.0</id>
>> > > > >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1180/</url>
>> > > > >         </repository>
>> > > > >         <repository>
>> > > > >           <id>archetype</id>
>> > > > >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1180/</url>
>> > > > >         </repository>
>> > > > >       </repositories>
>> > > > >     </profile>
>> > > > >   </profiles>
>> > > > > </settings>
>> > > > >
>> > > > > And reference that in your maven commands via --settings
>> > > > > path/to/settings.xml. This is useful for creating a quickstart
>> based
>> > on
>> > > > the
>> > > > > staged release and for building against the staged jars.
>> > > > >
>> > > >
>> > >
>> >
>>
>


[jira] [Created] (FLINK-10374) [Map State] Let user value serializer handle null values

2018-09-20 Thread Andrey Zagrebin (JIRA)
Andrey Zagrebin created FLINK-10374:
---

 Summary: [Map State] Let user value serializer handle null values
 Key: FLINK-10374
 URL: https://issues.apache.org/jira/browse/FLINK-10374
 Project: Flink
  Issue Type: Improvement
  Components: State Backends, Checkpointing
Reporter: Andrey Zagrebin
 Fix For: 2.0.0


Prior to Flink 2.0, the value serializer in map state does not rely on the user
serializer to handle null values. The map serializer always prepends the serialized
value with a one-byte boolean flag which signals whether the value is null or not.

Map state supports storing null user values for the following
get/contains semantics:

remove(k); contains(k) -> false; put(k, null); get(k) -> null; contains(k) ->
true;

It means that if the user does not need these semantics and does not store null
values, or the user value serializer already supports null values, one byte is
always wasted in the serialized value.

Rather than hardcoding null handling in the map state serializer, it could be made
optional and left up to the user to decide the behaviour. If users want to add null
support to their serializer, they could wrap it, e.g. with a NullableSerializer,
which does the same prepending of a null flag.
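
As a rough illustration, below is a minimal sketch of such a wrapper. It assumes a simplified Serializer interface instead of Flink's actual TypeSerializer API and only borrows the NullableSerializer name from the idea above; it is not the real implementation:
{code:java}
// Minimal sketch, assuming a simplified Serializer interface rather than
// Flink's actual TypeSerializer API; it only illustrates the idea of an
// optional, user-chosen null-handling wrapper.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

interface Serializer<T> {
    void serialize(T value, DataOutput out) throws IOException;
    T deserialize(DataInput in) throws IOException;
}

final class NullableSerializer<T> implements Serializer<T> {
    private final Serializer<T> inner;

    NullableSerializer(Serializer<T> inner) {
        this.inner = inner;
    }

    @Override
    public void serialize(T value, DataOutput out) throws IOException {
        // One-byte flag: true means the stored user value is null.
        out.writeBoolean(value == null);
        if (value != null) {
            inner.serialize(value, out);
        }
    }

    @Override
    public T deserialize(DataInput in) throws IOException {
        // If the flag says null, skip the inner serializer entirely.
        return in.readBoolean() ? null : inner.deserialize(in);
    }
}
{code}
Users whose value serializer already tolerates null values could then simply skip the wrapper and save the extra byte per entry.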



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release 1.6.1, release candidate #1

2018-09-20 Thread Till Rohrmann
I would not block the release on FLINK-10243 since you can always
deactivate the latency metrics. Instead, we should discuss whether to
backport this scalability improvement and include it in a later bug fix release.
For that, I suggest writing on the JIRA thread.

Cheers,
Till

On Thu, Sep 20, 2018 at 8:52 AM shimin yang  wrote:

> -1
>
> Could you merge the FLINK-10243 into release 1.6.1. I think the
> configurable latency metrics will be quite useful and not much work to
> merge.
>
> Best,
> Shimin
>
> vino yang  于2018年9月20日周四 下午2:08写道:
>
> > +1
> >
> > I checked the new Flink version in the  root pom file.
> > I checked the announcement blog post and make sure the version number is
> > right.
> > I checked out the source code and ran mvn package (without test)
> >
> > Thanks, vino.
> >
> > Fabian Hueske  于2018年9月20日周四 上午4:54写道:
> >
> > > +1 binding
> > >
> > > * I checked the diffs and did not find any added dependencies or
> updated
> > > dependency versions.
> > > * I checked the sha hash and signatures of all release artifacts.
> > >
> > > Best, Fabian
> > >
> > > 2018-09-19 11:43 GMT+02:00 Gary Yao :
> > >
> > > > +1 (non-binding)
> > > >
> > > > Ran test suite from the flink-jepsen project on AWS EC2 without
> issues.
> > > >
> > > > Best,
> > > > Gary
> > > >
> > > > On Sat, Sep 15, 2018 at 11:32 PM, Till Rohrmann <
> trohrm...@apache.org>
> > > > wrote:
> > > >
> > > > > Hi everyone,
> > > > > Please review and vote on the release candidate #1 for the version
> > > 1.6.1,
> > > > > as follows:
> > > > > [ ] +1, Approve the release
> > > > > [ ] -1, Do not approve the release (please provide specific
> comments)
> > > > >
> > > > >
> > > > > The complete staging area is available for your review, which
> > includes:
> > > > > * JIRA release notes [1],
> > > > > * the official Apache source release and binary convenience
> releases
> > to
> > > > be
> > > > > deployed to dist.apache.org [2], which are signed with the key
> > > > > with fingerprint 1F302569A96CFFD5 [3],
> > > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > > * source code tag "release-1.6.1-rc1" [5],
> > > > > * website pull request listing the new release and adding
> > announcement
> > > > > blog post [6].
> > > > >
> > > > > The vote will be open for at least 72 hours. It is adopted by
> > majority
> > > > > approval, with at least 3 PMC affirmative votes.
> > > > >
> > > > > [1]
> > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > > > > projectId=12315522=12343752
> > > > > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.6.1/
> > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > > [4] https://repository.apache.org/content/repositories/
> > > > orgapacheflink-1180
> > > > > [5] https://github.com/apache/flink/tree/release-1.6.1-rc1
> > > > > [6] https://github.com/apache/flink-web/pull/124
> > > > >
> > > > > Cheers,
> > > > > Till
> > > > >
> > > > > Pro-tip: you can create a settings.xml file with these contents:
> > > > >
> > > > > <settings>
> > > > >   <activeProfiles>
> > > > >     <activeProfile>flink-1.6.0</activeProfile>
> > > > >   </activeProfiles>
> > > > >   <profiles>
> > > > >     <profile>
> > > > >       <id>flink-1.6.0</id>
> > > > >       <repositories>
> > > > >         <repository>
> > > > >           <id>flink-1.6.0</id>
> > > > >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1180/</url>
> > > > >         </repository>
> > > > >         <repository>
> > > > >           <id>archetype</id>
> > > > >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1180/</url>
> > > > >         </repository>
> > > > >       </repositories>
> > > > >     </profile>
> > > > >   </profiles>
> > > > > </settings>
> > > > >
> > > > > And reference that in your maven commands via --settings
> > > > > path/to/settings.xml. This is useful for creating a quickstart
> based
> > on
> > > > the
> > > > > staged release and for building against the staged jars.
> > > > >
> > > >
> > >
> >
>


Re: [VOTE] Release 1.6.1, release candidate #1

2018-09-20 Thread shimin yang
-1

Could you merge FLINK-10243 into release 1.6.1? I think the
configurable latency metrics will be quite useful and not much work to
merge.

Best,
Shimin

vino yang wrote on Thu, Sep 20, 2018 at 2:08 PM:

> +1
>
> I checked the new Flink version in the  root pom file.
> I checked the announcement blog post and make sure the version number is
> right.
> I checked out the source code and ran mvn package (without test)
>
> Thanks, vino.
>
> Fabian Hueske  于2018年9月20日周四 上午4:54写道:
>
> > +1 binding
> >
> > * I checked the diffs and did not find any added dependencies or updated
> > dependency versions.
> > * I checked the sha hash and signatures of all release artifacts.
> >
> > Best, Fabian
> >
> > 2018-09-19 11:43 GMT+02:00 Gary Yao :
> >
> > > +1 (non-binding)
> > >
> > > Ran test suite from the flink-jepsen project on AWS EC2 without issues.
> > >
> > > Best,
> > > Gary
> > >
> > > On Sat, Sep 15, 2018 at 11:32 PM, Till Rohrmann 
> > > wrote:
> > >
> > > > Hi everyone,
> > > > Please review and vote on the release candidate #1 for the version
> > 1.6.1,
> > > > as follows:
> > > > [ ] +1, Approve the release
> > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > >
> > > >
> > > > The complete staging area is available for your review, which
> includes:
> > > > * JIRA release notes [1],
> > > > * the official Apache source release and binary convenience releases
> to
> > > be
> > > > deployed to dist.apache.org [2], which are signed with the key
> > > > with fingerprint 1F302569A96CFFD5 [3],
> > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > * source code tag "release-1.6.1-rc1" [5],
> > > > * website pull request listing the new release and adding
> announcement
> > > > blog post [6].
> > > >
> > > > The vote will be open for at least 72 hours. It is adopted by
> majority
> > > > approval, with at least 3 PMC affirmative votes.
> > > >
> > > > [1]
> > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > > > projectId=12315522=12343752
> > > > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.6.1/
> > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > [4] https://repository.apache.org/content/repositories/
> > > orgapacheflink-1180
> > > > [5] https://github.com/apache/flink/tree/release-1.6.1-rc1
> > > > [6] https://github.com/apache/flink-web/pull/124
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > > Pro-tip: you can create a settings.xml file with these contents:
> > > >
> > > > <settings>
> > > >   <activeProfiles>
> > > >     <activeProfile>flink-1.6.0</activeProfile>
> > > >   </activeProfiles>
> > > >   <profiles>
> > > >     <profile>
> > > >       <id>flink-1.6.0</id>
> > > >       <repositories>
> > > >         <repository>
> > > >           <id>flink-1.6.0</id>
> > > >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1180/</url>
> > > >         </repository>
> > > >         <repository>
> > > >           <id>archetype</id>
> > > >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1180/</url>
> > > >         </repository>
> > > >       </repositories>
> > > >     </profile>
> > > >   </profiles>
> > > > </settings>
> > > >
> > > > And reference that in your maven commands via --settings
> > > > path/to/settings.xml. This is useful for creating a quickstart based
> on
> > > the
> > > > staged release and for building against the staged jars.
> > > >
> > >
> >
>


Re: [VOTE] Release 1.6.1, release candidate #1

2018-09-20 Thread vino yang
+1

I checked the new Flink version in the root pom file.
I checked the announcement blog post and made sure the version number is
right.
I checked out the source code and ran mvn package (without tests).

Thanks, vino.

Fabian Hueske wrote on Thu, Sep 20, 2018 at 4:54 AM:

> +1 binding
>
> * I checked the diffs and did not find any added dependencies or updated
> dependency versions.
> * I checked the sha hash and signatures of all release artifacts.
>
> Best, Fabian
>
> 2018-09-19 11:43 GMT+02:00 Gary Yao :
>
> > +1 (non-binding)
> >
> > Ran test suite from the flink-jepsen project on AWS EC2 without issues.
> >
> > Best,
> > Gary
> >
> > On Sat, Sep 15, 2018 at 11:32 PM, Till Rohrmann 
> > wrote:
> >
> > > Hi everyone,
> > > Please review and vote on the release candidate #1 for the version
> 1.6.1,
> > > as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > >
> > > The complete staging area is available for your review, which includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release and binary convenience releases to
> > be
> > > deployed to dist.apache.org [2], which are signed with the key
> > > with fingerprint 1F302569A96CFFD5 [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag "release-1.6.1-rc1" [5],
> > > * website pull request listing the new release and adding announcement
> > > blog post [6].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > [1]
> > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > > projectId=12315522=12343752
> > > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.6.1/
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4] https://repository.apache.org/content/repositories/
> > orgapacheflink-1180
> > > [5] https://github.com/apache/flink/tree/release-1.6.1-rc1
> > > [6] https://github.com/apache/flink-web/pull/124
> > >
> > > Cheers,
> > > Till
> > >
> > > Pro-tip: you can create a settings.xml file with these contents:
> > >
> > > <settings>
> > >   <activeProfiles>
> > >     <activeProfile>flink-1.6.0</activeProfile>
> > >   </activeProfiles>
> > >   <profiles>
> > >     <profile>
> > >       <id>flink-1.6.0</id>
> > >       <repositories>
> > >         <repository>
> > >           <id>flink-1.6.0</id>
> > >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1180/</url>
> > >         </repository>
> > >         <repository>
> > >           <id>archetype</id>
> > >           <url>https://repository.apache.org/content/repositories/orgapacheflink-1180/</url>
> > >         </repository>
> > >       </repositories>
> > >     </profile>
> > >   </profiles>
> > > </settings>
> > >
> > > And reference that in your maven commands via --settings
> > > path/to/settings.xml. This is useful for creating a quickstart based on
> > the
> > > staged release and for building against the staged jars.
> > >
> >
>