[jira] [Created] (FLINK-9829) The wrapper classes should be compared by equals method rather than by '==' directly in BigDecSerializer.java

2018-07-11 Thread lamber-ken (JIRA)
lamber-ken created FLINK-9829:
-

 Summary: The wrapper classes should be compared by equals method rather than by '==' directly in BigDecSerializer.java
 Key: FLINK-9829
 URL: https://issues.apache.org/jira/browse/FLINK-9829
 Project: Flink
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.5.0
Reporter: lamber-ken
Assignee: lamber-ken
 Fix For: 1.5.2


The wrapper classes should be compared with the equals method rather than with '==' directly in BigDecSerializer.java, since '==' compares object references rather than values.
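
As a generic illustration of the pitfall (a hedged sketch, not the actual BigDecSerializer code): boxed values outside the JVM's small-value cache are distinct objects, so '==' compares references where equals() compares values.

{code}
// Generic illustration, not the actual BigDecSerializer code.
public class WrapperEqualityExample {
    public static void main(String[] args) {
        // Boxed values outside the small-value cache (-128..127) are distinct objects.
        Long a = 128L;
        Long b = 128L;
        System.out.println(a == b);      // false: '==' compares references
        System.out.println(a.equals(b)); // true: equals() compares values
    }
}
{code}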



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9828) Resource manager should recover slot resource status after failover

2018-07-11 Thread shuai.xu (JIRA)
shuai.xu created FLINK-9828:
---

 Summary: Resource manager should recover slot resource status 
after failover
 Key: FLINK-9828
 URL: https://issues.apache.org/jira/browse/FLINK-9828
 Project: Flink
  Issue Type: Bug
  Components: Cluster Management
Reporter: shuai.xu


After resource manager failover, task executors report their slot allocation status to the RM. But the report does not contain the resource profile, so the RM only knows that the slots are occupied but cannot know how much resource is used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9827) ResourceManager may receive outdated report of slot status from task manager

2018-07-11 Thread shuai.xu (JIRA)
shuai.xu created FLINK-9827:
---

 Summary: ResourceManager may receive outdated report of slot status from task manager
 Key: FLINK-9827
 URL: https://issues.apache.org/jira/browse/FLINK-9827
 Project: Flink
  Issue Type: Bug
  Components: Cluster Management
Affects Versions: 1.5.0
Reporter: shuai.xu
Assignee: shuai.xu


The TaskExecutor reports its slot status to the resource manager in heartbeats, but this happens in a different thread than the main RPC thread. So it may happen that the RM requests a slot from a task executor but then receives a heartbeat saying the slot is not assigned. This will cause the slot to be freed and assigned again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9826) Implement FLIP-6 YARN Resource Manager for SESSION mode

2018-07-11 Thread shuai.xu (JIRA)
shuai.xu created FLINK-9826:
---

 Summary: Implement FLIP-6 YARN Resource Manager for SESSION mode
 Key: FLINK-9826
 URL: https://issues.apache.org/jira/browse/FLINK-9826
 Project: Flink
  Issue Type: New Feature
  Components: Cluster Management
Reporter: shuai.xu
Assignee: shuai.xu


The Flink YARN Session Resource Manager communicates with YARN's Resource Manager to acquire and release containers. It will ask YARN for N containers according to the configuration.

It is also responsible for eagerly notifying the JobManager about container failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9825) Upgrade checkstyle version to 8.6

2018-07-11 Thread Ted Yu (JIRA)
Ted Yu created FLINK-9825:
-

 Summary: Upgrade checkstyle version to 8.6
 Key: FLINK-9825
 URL: https://issues.apache.org/jira/browse/FLINK-9825
 Project: Flink
  Issue Type: Improvement
Reporter: Ted Yu


We should upgrade the checkstyle version to 8.6+ so that we can use the "match violation message to this regex" feature for suppressions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9824) Support IPv6 literal

2018-07-11 Thread Ted Yu (JIRA)
Ted Yu created FLINK-9824:
-

 Summary: Support IPv6 literal
 Key: FLINK-9824
 URL: https://issues.apache.org/jira/browse/FLINK-9824
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu


Currently we use the colon as the separator when parsing host and port.

We should support IPv6 literals in parsing; since IPv6 literals contain colons themselves, splitting on a colon alone breaks them.
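
A minimal sketch of bracket-aware parsing in the RFC 3986 style ("[::1]:8080"), assuming nothing about Flink's actual parsing code:

{code}
// Hypothetical sketch, not Flink's parser: handle "[ipv6]:port" and "host:port".
public class HostPortParser {

    static String[] parseHostPort(String s) {
        if (s.startsWith("[")) {                 // IPv6 literal, e.g. "[::1]:8080"
            int end = s.indexOf(']');
            return new String[] {s.substring(1, end), s.substring(end + 2)}; // skip "]:"
        }
        int colon = s.lastIndexOf(':');          // hostname/IPv4, e.g. "example.com:8080"
        return new String[] {s.substring(0, colon), s.substring(colon + 1)};
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(parseHostPort("[::1]:8080")));       // [::1, 8080]
        System.out.println(java.util.Arrays.toString(parseHostPort("example.com:8080"))); // [example.com, 8080]
    }
}
{code}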



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Flink on Mesos - have to manually add host name to /etc/hosts

2018-07-11 Thread vino yang
Hi Alex,

It seems it's an issue with host name resolution. If you use a hostname,
Flink will try to resolve the real IP address; if it cannot find the host/IP
mapping, it will throw the exception.

I think the link of the answer is a correct way of fixing this issue.

Thanks.
Vino.

2018-07-11 21:30 GMT+08:00 NEKRASSOV, ALEXEI :

> When I attempted to start Flink 1.4.2 on Mesos, I ran into the issue
> described here: https://stackoverflow.com/questions/45391980/error-installing-flink-in-dcos
> This workaround solved the problem: https://stackoverflow.com/a/48184752
>
> Is there something that can be changed in the Flink project, to avoid this
> issue? Or is there a place to document it along with the workaround?
>
> Alex
>


[jira] [Created] (FLINK-9823) Add Kubernetes deployment files for standalone job cluster

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9823:


 Summary: Add Kubernetes deployment files for standalone job cluster
 Key: FLINK-9823
 URL: https://issues.apache.org/jira/browse/FLINK-9823
 Project: Flink
  Issue Type: New Feature
  Components: Kubernetes
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


Similar to FLINK-9822, it would be helpful for the user to have example 
Kubernetes deployment files to start a standalone job cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9822) Add Dockerfile for StandaloneJobClusterEntryPoint image

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9822:


 Summary: Add Dockerfile for StandaloneJobClusterEntryPoint image
 Key: FLINK-9822
 URL: https://issues.apache.org/jira/browse/FLINK-9822
 Project: Flink
  Issue Type: New Feature
  Components: Docker
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


Add a {{Dockerfile}} to create an image which contains the 
{{StandaloneJobClusterEntryPoint}} and a specified user code jar. The 
entrypoint of this image should start the {{StandaloneJobClusterEntryPoint}} 
with the added user code jar. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9821) Let dynamic properties overwrite configuration settings in TaskManagerRunner

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9821:


 Summary: Let dynamic properties overwrite configuration settings 
in TaskManagerRunner
 Key: FLINK-9821
 URL: https://issues.apache.org/jira/browse/FLINK-9821
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Coordination
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


Similar to FLINK-9820 we should also allow dynamic properties to overwrite 
configuration values in the {{TaskManagerRunner}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9820) Let dynamic properties overwrite configuration settings in ClusterEntrypoint

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9820:


 Summary: Let dynamic properties overwrite configuration settings 
in ClusterEntrypoint
 Key: FLINK-9820
 URL: https://issues.apache.org/jira/browse/FLINK-9820
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Coordination
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


The dynamic properties which are passed to the {{ClusterEntrypoint}} should 
overwrite values in the loaded {{Configuration}}.
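
A minimal sketch of the intended precedence, assuming the dynamic properties arrive as a simple key/value map (illustrative only, not the actual {{ClusterEntrypoint}} code):

{code}
import org.apache.flink.configuration.Configuration;
import java.util.Map;

// Illustrative sketch: values parsed from -Dkey=value command line options
// overwrite whatever was loaded from flink-conf.yaml.
public final class DynamicPropertiesExample {

    static Configuration applyDynamicProperties(Configuration loaded, Map<String, String> dynamic) {
        for (Map.Entry<String, String> e : dynamic.entrySet()) {
            loaded.setString(e.getKey(), e.getValue()); // dynamic property wins
        }
        return loaded;
    }
}
{code}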



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9819) Create start up scripts for the StandaloneJobClusterEntryPoint

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9819:


 Summary: Create start up scripts for the 
StandaloneJobClusterEntryPoint
 Key: FLINK-9819
 URL: https://issues.apache.org/jira/browse/FLINK-9819
 Project: Flink
  Issue Type: New Feature
  Components: Startup Shell Scripts
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


In order to start the {{StandaloneJobClusterEntryPoint}} we need start-up
scripts in {{flink-dist}}. We should extend the {{flink-daemon.sh}} and the
{{flink-console.sh}} scripts to support this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9818) Add cluster component command line parser

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9818:


 Summary: Add cluster component command line parser
 Key: FLINK-9818
 URL: https://issues.apache.org/jira/browse/FLINK-9818
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Coordination
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


In order to parse command line options for the cluster components 
({{TaskManagerRunner}}, {{ClusterEntrypoints}}), we should add a 
{{CommandLineParser}} which supports the common command line options 
({{--configDir}}, {{--webui-port}} and dynamic properties which can override 
configuration values).
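
A minimal sketch of such a parser, using plain argument scanning (names and structure are illustrative, not the actual {{CommandLineParser}}):

{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: recognizes --configDir, --webui-port and
// -Dkey=value dynamic properties.
public final class ClusterComponentArgs {
    String configDir;
    Integer webuiPort;
    final Map<String, String> dynamicProperties = new HashMap<>();

    static ClusterComponentArgs parse(String[] args) {
        ClusterComponentArgs parsed = new ClusterComponentArgs();
        for (int i = 0; i < args.length; i++) {
            if ("--configDir".equals(args[i])) {
                parsed.configDir = args[++i];
            } else if ("--webui-port".equals(args[i])) {
                parsed.webuiPort = Integer.parseInt(args[++i]);
            } else if (args[i].startsWith("-D")) {            // dynamic property
                String[] kv = args[i].substring(2).split("=", 2);
                parsed.dynamicProperties.put(kv[0], kv.length > 1 ? kv[1] : "");
            }
        }
        return parsed;
    }
}
{code}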



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9817) Version not correctly bumped to 5.0

2018-07-11 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-9817:
--

 Summary: Version not correctly bumped to 5.0
 Key: FLINK-9817
 URL: https://issues.apache.org/jira/browse/FLINK-9817
 Project: Flink
  Issue Type: Bug
  Components: flink-shaded.git
Reporter: Nico Kruber
Assignee: Nico Kruber


Current {{master}} of {{flink-shaded}} only made half of the changes needed 
when updating version numbers: the suffix in each sub-module's version number 
also needs to be adapted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9816) Support Netty SslEngine based on openSSL

2018-07-11 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-9816:
--

 Summary: Support Netty SslEngine based on openSSL
 Key: FLINK-9816
 URL: https://issues.apache.org/jira/browse/FLINK-9816
 Project: Flink
  Issue Type: Improvement
  Components: Network
Reporter: Nico Kruber
Assignee: Nico Kruber


For a while now, Netty has supported not only the JDK's {{SSLEngine}} but also an {{SSLEngine}} implementation based on openSSL which, according to https://netty.io/wiki/requirements-for-4.x.html#wiki-h4-4, is significantly faster. We should add support for using that engine instead.
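
A minimal sketch of selecting the openSSL-based engine with Netty's public API (this assumes the netty-tcnative dependency is available at runtime; the wiring into Flink's network stack is omitted):

{code}
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;
import javax.net.ssl.SSLException;

// Sketch: pick the openSSL-based engine when netty-tcnative is on the
// classpath, otherwise fall back to the JDK engine.
public final class OpenSslEngineExample {

    static SslContext clientContext() throws SSLException {
        SslProvider provider = OpenSsl.isAvailable() ? SslProvider.OPENSSL : SslProvider.JDK;
        return SslContextBuilder.forClient()
                .sslProvider(provider)
                .build();
    }
}
{code}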



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release 1.5.1, release candidate #3

2018-07-11 Thread Yaz Sh
+1

- Verified the signatures for all binary artifacts
- Verified checksums for all binary packages
- Ran local cluster with no errors in logs and empty *.out
- Stopped local cluster with no errors in logs
- Ran multiple batch and streaming examples via WebUI
- Ran multiple batch and streaming examples via CLI
- Increased the number of task managers and ran examples with parallelism > 1
- Ran WebUI on multiple browsers
- Checked the examples folder for all binary packages

Just an observation:
When I ran a job with parallelism > available task slots, intermittently the job stays in "Running" status for a very long time and neither finishes nor throws any errors.

Please check if someone else can reproduce it.

Cheers,
Yazdan


> On Jul 11, 2018, at 11:21 AM, Till Rohrmann  wrote:
> 
> +1 (binding)
> 
> - Verified the signatures of all binary artifacts
> - Verified that no new dependencies were added for which the LICENSE and 
> NOTICE files need to be adapted.
> - Build 1.5.1 from the source artifact
> - Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7 binary 
> artifact
> - Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary artifact
> 
> Cheers,
> Till
> 
> 
> On Wed, Jul 11, 2018 at 9:49 AM Chesnay Schepler wrote:
> Correction on my part, it does affect all packages.
> 
> I've also found the cause. To speed up the process I only built modules 
> that flink-dist depends on (see FLINK-9768). However flink-dist depends 
> on neither flink-examples-batch nor flink-examples-streaming, yet 
> happily accesses their target directory. The existing build process only 
> worked since _by chance_ these 2 modules are built before flink-dist 
> when doing a complete build.
> 
> I will rebuild the binaries (I don't think we have to cancel the RC for 
> this) and open a JIRA to fix the dependencies.
> 
> On 11.07.2018 09:27, Chesnay Schepler wrote:
> > oh, the packages that include hadoop are really missing it...
> >
> > On 11.07.2018 09:25, Chesnay Schepler wrote:
> >> @Yaz which binary package did you check? I looked into the 
> >> hadoop-free package and the folder is there.
> >>
> >> Did you maybe encounter an error when extracting the package?
> >>
> >> On 11.07.2018 05:44, Yaz Sh wrote:
> >>> -1
> >>>
> >>> ./examples/streaming folder is missing in binary packages
> >>>
> >>>
> >>> Cheers,
> >>> Yazdan
> >>>
>  On Jul 10, 2018, at 9:57 PM, vino yang wrote:
> 
>  +1
>  reviewed [1], [4] and [6]
> 
> 2018-07-11 3:10 GMT+08:00 Chesnay Schepler:
> 
> > Hi everyone,
> > Please review and vote on the release candidate #3 for the version 
> > 1.5.1,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which 
> > includes:
> > * JIRA release notes [1],
> > * the official Apache source release and binary convenience 
> > releases to be
> > deployed to dist.apache.org [2], which are
> > signed with the key with
> > fingerprint 11D464BA [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-1.5.1-rc3" [5],
> > * website pull request listing the new release and adding 
> > announcement
> > blog post [6].
> >
> > This RC is a slightly modified version of the previous RC, with most
> > release testing being applicable to both release candidates. The 
> > minimum
> > voting duration will hence be reduced to 24 hours. It is adopted by
> > majority approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Chesnay
> >
> > [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053
> > [2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4] https://repository.apache.org/content/repositories/orgapacheflink-1171
> > [5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc3
> > [6] https://github.com/apache/flink-web/pull/112



[jira] [Created] (FLINK-9815) YARNSessionCapacitySchedulerITCase flaky

2018-07-11 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9815:
---

 Summary: YARNSessionCapacitySchedulerITCase flaky
 Key: FLINK-9815
 URL: https://issues.apache.org/jira/browse/FLINK-9815
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.6.0
Reporter: Dawid Wysakowicz


The test fails because of dangling yarn applications.

Logs: https://api.travis-ci.org/v3/job/402657694/log.txt

It was also reported previously in [FLINK-8161]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9814) CsvTableSource lack of column warning

2018-07-11 Thread JIRA
François Lacombe created FLINK-9814:
---

 Summary: CsvTableSource lack of column warning
 Key: FLINK-9814
 URL: https://issues.apache.org/jira/browse/FLINK-9814
 Project: Flink
  Issue Type: Wish
  Components: Table API & SQL
Affects Versions: 1.5.0
Reporter: François Lacombe


The CsvTableSource class is built by defining the expected columns to be found in the corresponding CSV file.

It would be great to throw an Exception when the CSV file doesn't have the same structure as defined in the source.

It can be easily checked against the file header, if one exists.

Is this possible?
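
A minimal sketch of the wished-for check, assuming a comma-separated header line compared against the declared field names (illustrative only, not existing Flink code):

{code}
import java.util.Arrays;

// Illustrative sketch of the wished-for validation, not existing Flink code.
public final class CsvHeaderCheck {

    static void validateHeader(String headerLine, String[] declaredFields) {
        String[] headerFields = headerLine.split(",");
        if (!Arrays.equals(headerFields, declaredFields)) {
            throw new IllegalArgumentException(
                "CSV header " + Arrays.toString(headerFields)
                    + " does not match the declared schema " + Arrays.toString(declaredFields));
        }
    }
}
{code}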



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9813) Build xTableSource from Avro schemas

2018-07-11 Thread JIRA
François Lacombe created FLINK-9813:
---

 Summary: Build xTableSource from Avro schemas
 Key: FLINK-9813
 URL: https://issues.apache.org/jira/browse/FLINK-9813
 Project: Flink
  Issue Type: Wish
  Components: Table API & SQL
Affects Versions: 1.5.0
Reporter: François Lacombe


As Avro provides an efficient formalism for data schemas, it would be great to be able to build Flink table sources from such files.

More info about Avro schemas: https://avro.apache.org/docs/1.8.1/spec.html#schemas

For instance, with CsvTableSource (proposed usage; today's builder has no schema() method):

{code}
Schema.Parser schemaParser = new Schema.Parser();
Schema tableSchema = schemaParser.parse(new File("avro.json")); // parse(String) would treat the argument as inline schema JSON
CsvTableSource.Builder bld = CsvTableSource.builder().schema(tableSchema);
{code}

This would give me a fully available CsvTableSource with the columns defined in avro.json.

It may be possible to do so for every TableSource since the Avro format is really common and versatile.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9812) SpanningRecordSerializationTest fails on travis

2018-07-11 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9812:
---

 Summary: SpanningRecordSerializationTest fails on travis
 Key: FLINK-9812
 URL: https://issues.apache.org/jira/browse/FLINK-9812
 Project: Flink
  Issue Type: Bug
  Components: Network, Type Serialization System
Affects Versions: 1.6.0
Reporter: Chesnay Schepler


https://travis-ci.org/zentol/flink/jobs/402744191

{code}
testHandleMixedLargeRecords(org.apache.flink.runtime.io.network.api.serialization.SpanningRecordSerializationTest)
  Time elapsed: 6.113 sec  <<< ERROR!
java.nio.channels.ClosedChannelException: null
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110)
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:199)
at 
org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer$SpanningWrapper.addNextChunkFromMemorySegment(SpillingAdaptiveSpanningRecordDeserializer.java:529)
at 
org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer$SpanningWrapper.access$200(SpillingAdaptiveSpanningRecordDeserializer.java:431)
at 
org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.setNextBuffer(SpillingAdaptiveSpanningRecordDeserializer.java:76)
at 
org.apache.flink.runtime.io.network.api.serialization.SpanningRecordSerializationTest.testSerializationRoundTrip(SpanningRecordSerializationTest.java:149)
at 
org.apache.flink.runtime.io.network.api.serialization.SpanningRecordSerializationTest.testSerializationRoundTrip(SpanningRecordSerializationTest.java:115)
at 
org.apache.flink.runtime.io.network.api.serialization.SpanningRecordSerializationTest.testHandleMixedLargeRecords(SpanningRecordSerializationTest.java:104)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9811) Add ITCase for interactions of Jar handlers

2018-07-11 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9811:
---

 Summary: Add ITCase for interactions of Jar handlers
 Key: FLINK-9811
 URL: https://issues.apache.org/jira/browse/FLINK-9811
 Project: Flink
  Issue Type: Improvement
  Components: Job-Submission, REST, Webfrontend
Affects Versions: 1.6.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.6.0


We have a number of jar handlers with varying responsibilities: accepting jar uploads, listing uploaded jars, running jars, etc.
These handlers may rely on the behavior of other handlers; for example they 
might expect a specific naming scheme.
We should add a test to ensure that a common life-cycle for a jar (upload -> 
list -> show plan -> run -> delete) doesn't cause problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9810) JarListHandler does not close opened jars

2018-07-11 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9810:
---

 Summary: JarListHandler does not close opened jars
 Key: FLINK-9810
 URL: https://issues.apache.org/jira/browse/FLINK-9810
 Project: Flink
  Issue Type: Bug
  Components: REST, Webfrontend
Affects Versions: 1.4.2, 1.5.0, 1.6.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.3, 1.5.1, 1.6.0


{code}
try {
    // the JarFile is opened here but never closed -> leaked file handle
    JarFile jar = new JarFile(f);
    Manifest manifest = jar.getManifest();
    String assemblerClass = null;

    if (manifest != null) {
        assemblerClass = manifest.getMainAttributes().getValue(PackagedProgram.MANIFEST_ATTRIBUTE_ASSEMBLER_CLASS);
        if (assemblerClass == null) {
            assemblerClass = manifest.getMainAttributes().getValue(PackagedProgram.MANIFEST_ATTRIBUTE_MAIN_CLASS);
        }
    }
    if (assemblerClass != null) {
        classes = assemblerClass.split(",");
    }
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9809) Support setting CoLocation constraints on the DataStream Transformations

2018-07-11 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-9809:
---

 Summary: Support setting CoLocation constraints on the DataStream 
Transformations
 Key: FLINK-9809
 URL: https://issues.apache.org/jira/browse/FLINK-9809
 Project: Flink
  Issue Type: Improvement
  Components: Streaming
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.6.0


Flink supports co-location constraints for operator placement during scheduling. This is used internally for iterations, for example, but is not exposed to users.

I propose to add a way for expert users to set these constraints. As a first 
step, I would add them to the {{StreamTransformation}}, which is not part of 
the public user-facing classes, but a more internal class in the DataStream 
API. That way we make this initially a hidden feature and can gradually expose 
it more prominently when we agree that this would be a good idea.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release 1.5.1, release candidate #3

2018-07-11 Thread Till Rohrmann
+1 (binding)

- Verified the signatures of all binary artifacts
- Verified that no new dependencies were added for which the LICENSE and
NOTICE files need to be adapted.
- Build 1.5.1 from the source artifact
- Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7 binary
artifact
- Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary artifact

Cheers,
Till


On Wed, Jul 11, 2018 at 9:49 AM Chesnay Schepler  wrote:

> Correction on my part, it does affect all packages.
>
> I've also found the cause. To speed up the process I only built modules
> that flink-dist depends on (see FLINK-9768). However flink-dist depends
> on neither flink-examples-batch nor flink-examples-streaming, yet
> happily accesses their target directory. The existing build process only
> worked since _by chance_ these 2 modules are built before flink-dist
> when doing a complete build.
>
> I will rebuild the binaries (I don't think we have to cancel the RC for
> this) and open a JIRA to fix the dependencies.
>
> On 11.07.2018 09:27, Chesnay Schepler wrote:
> > oh, the packages that include hadoop are really missing it...
> >
> > On 11.07.2018 09:25, Chesnay Schepler wrote:
> >> @Yaz which binary package did you check? I looked into the
> >> hadoop-free package and the folder is there.
> >>
> >> Did you maybe encounter an error when extracting the package?
> >>
> >> On 11.07.2018 05:44, Yaz Sh wrote:
> >>> -1
> >>>
> >>> ./examples/streaming folder is missing in binary packages
> >>>
> >>>
> >>> Cheers,
> >>> Yazdan
> >>>
>  On Jul 10, 2018, at 9:57 PM, vino yang  wrote:
> 
>  +1
>  reviewed [1], [4] and [6]
> 
>  2018-07-11 3:10 GMT+08:00 Chesnay Schepler :
> 
> > Hi everyone,
> > Please review and vote on the release candidate #3 for the version
> > 1.5.1,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which
> > includes:
> > * JIRA release notes [1],
> > * the official Apache source release and binary convenience
> > releases to be
> > deployed to dist.apache.org [2], which are signed with the key with
> > fingerprint 11D464BA [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-1.5.1-rc3" [5],
> > * website pull request listing the new release and adding
> > announcement
> > blog post [6].
> >
> > This RC is a slightly modified version of the previous RC, with most
> > release testing being applicable to both release candidates. The
> > minimum
> > voting duration will hence be reduced to 24 hours. It is adopted by
> > majority approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Chesnay
> >
> > [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053
> > [2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4] https://repository.apache.org/content/repositories/orgapacheflink-1171
> > [5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc3
> > [6] https://github.com/apache/flink-web/pull/112
> >
> >
> >
> >
> >
> >>>
> >>
> >>
> >
> >
>
>


Re: [VOTE] Release 1.5.1, release candidate #3

2018-07-11 Thread Chesnay Schepler

+1

* verified that the binary packages contained streaming examples
* started local cluster without exceptions in the logs
* ran multiple streaming examples

On 11.07.2018 16:51, Chesnay Schepler wrote:
I have rebuilt the binary release packages, they should now contain 
all examples.


On 11.07.2018 09:49, Chesnay Schepler wrote:

Correction on my part, it does affect all packages.

I've also found the cause. To speed up the process I only built 
modules that flink-dist depends on (see FLINK-9768). However 
flink-dist depends on neither flink-examples-batch nor 
flink-examples-streaming, yet happily accesses their target 
directory. The existing build process only worked since _by chance_ 
these 2 modules are built before flink-dist when doing a complete build.


I will rebuild the binaries (I don't think we have to cancel the RC 
for this) and open a JIRA to fix the dependencies.


On 11.07.2018 09:27, Chesnay Schepler wrote:

oh, the packages that include hadoop are really missing it...

On 11.07.2018 09:25, Chesnay Schepler wrote:
@Yaz which binary package did you check? I looked into the 
hadoop-free package and the folder is there.


Did you maybe encounter an error when extracting the package?

On 11.07.2018 05:44, Yaz Sh wrote:

-1

./examples/streaming folder is missing in binary packages


Cheers,
Yazdan

On Jul 10, 2018, at 9:57 PM, vino yang  
wrote:


+1
reviewed [1], [4] and [6]

2018-07-11 3:10 GMT+08:00 Chesnay Schepler :


Hi everyone,
Please review and vote on the release candidate #3 for the 
version 1.5.1,

as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific 
comments)



The complete staging area is available for your review, which 
includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience 
releases to be

deployed to dist.apache.org [2], which are signed with the key with
fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc3" [5],
* website pull request listing the new release and adding 
announcement

blog post [6].

This RC is a slightly modified version of the previous RC, with 
most
release testing being applicable to both release candidates. The 
minimum

voting duration will hence be reduced to 24 hours. It is adopted by
majority approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053
[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1171
[5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc3
[6] https://github.com/apache/flink-web/pull/112


Re: flink JPS result changes

2018-07-11 Thread Chesnay Schepler
Generally speaking, no. The Dispatcher (here called StandaloneSessionClusterEntrypoint) will spawn a JobManager internally when a job is submitted.


On 11.07.2018 16:42, Will Du wrote:

In this case, do I need to add a JobManager?
On Jul 11, 2018, at 10:14 AM, miki haiat wrote:

FLIP-6 changed the execution model completely:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077
https://docs.google.com/document/d/1zwBP3LKnJ0LI1R4-9hLXBVjOXAkQS5jKrGYYoPz-xRk/edit#heading=h.giuwq6q8d23j



On Wed, Jul 11, 2018 at 5:09 PM Will Du wrote:


Hi folks,
Do we have any information about the process changes after
v1.5.0? I used to see JobManager and TaskManager processes once
start-cluster.sh was called. But it shows the below in v1.5.0
once started. Everything works, but I have no idea where the JobManager is.

$jps
2523 TaskManagerRunner
2190 StandaloneSessionClusterEntrypoint

thanks,
Will







Re: [VOTE] Release 1.5.1, release candidate #3

2018-07-11 Thread Chesnay Schepler
I have rebuilt the binary release packages, they should now contain all 
examples.


On 11.07.2018 09:49, Chesnay Schepler wrote:

Correction on my part, it does affect all packages.

I've also found the cause. To speed up the process I only built 
modules that flink-dist depends on (see FLINK-9768). However 
flink-dist depends on neither flink-examples-batch nor 
flink-examples-streaming, yet happily accesses their target directory. 
The existing build process only worked since _by chance_ these 2 
modules are built before flink-dist when doing a complete build.


I will rebuild the binaries (I don't think we have to cancel the RC 
for this) and open a JIRA to fix the dependencies.


On 11.07.2018 09:27, Chesnay Schepler wrote:

oh, the packages that include hadoop are really missing it...

On 11.07.2018 09:25, Chesnay Schepler wrote:
@Yaz which binary package did you check? I looked into the 
hadoop-free package and the folder is there.


Did you maybe encounter an error when extracting the package?

On 11.07.2018 05:44, Yaz Sh wrote:

-1

./examples/streaming folder is missing in binary packages


Cheers,
Yazdan


On Jul 10, 2018, at 9:57 PM, vino yang  wrote:

+1
reviewed [1], [4] and [6]

2018-07-11 3:10 GMT+08:00 Chesnay Schepler :


Hi everyone,
Please review and vote on the release candidate #3 for the 
version 1.5.1,

as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific 
comments)



The complete staging area is available for your review, which 
includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience 
releases to be

deployed to dist.apache.org [2], which are signed with the key with
fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc3" [5],
* website pull request listing the new release and adding 
announcement

blog post [6].

This RC is a slightly modified version of the previous RC, with most
release testing being applicable to both release candidates. The 
minimum

voting duration will hence be reduced to 24 hours. It is adopted by
majority approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053
[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1171
[5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc3
[6] https://github.com/apache/flink-web/pull/112


[jira] [Created] (FLINK-9808) Implement state conversion procedure in state backends

2018-07-11 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-9808:
--

 Summary: Implement state conversion procedure in state backends
 Key: FLINK-9808
 URL: https://issues.apache.org/jira/browse/FLINK-9808
 Project: Flink
  Issue Type: Sub-task
  Components: State Backends, Checkpointing
Reporter: Tzu-Li (Gordon) Tai
Assignee: Aljoscha Krettek
 Fix For: 1.6.0


With FLINK-9377 in place, config snapshots serve as the single source of truth for recreating restore serializers; the next step would be to utilize this when performing a full-pass state conversion (i.e., read with the old/restore serializer, write with the new serializer).

For Flink's heap-based backends, it can be seen that state conversion 
inherently happens, since all state is always deserialized after restore with 
the restore serializer, and written with the new serializer on snapshots.

For the RocksDB state backend, since state is lazily deserialized, state 
conversion needs to happen for per-registered state on their first access if 
the registered new serializer has a different serialization schema than the 
previous serializer.

This task should consist of three parts:

1. Allow {{CompatibilityResult}} to correctly distinguish between whether the 
new serializer's schema is a) compatible with the serializer as it is, b) 
compatible after the serializer has been reconfigured, or c) incompatible.

2. Introduce state conversion procedures in the RocksDB state backend. This should occur on the first state access (see the sketch after this list).

3. Make sure that all other backends no longer do redundant serializer 
compatibility checks. That is not required because those backends always 
perform full-pass state conversions.
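
A minimal sketch of the read-with-old/write-with-new step for a single stored value (illustrative only; the actual RocksDB integration and the compatibility checks are omitted):

{code}
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.core.memory.DataInputViewStreamWrapper;
import org.apache.flink.core.memory.DataOutputViewStreamWrapper;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Illustrative sketch: convert one stored value from the restore
// serializer's schema to the new serializer's schema.
public final class StateConversionSketch {

    static <T> byte[] convert(byte[] storedBytes,
                              TypeSerializer<T> restoreSerializer,
                              TypeSerializer<T> newSerializer) throws IOException {
        T value = restoreSerializer.deserialize(
            new DataInputViewStreamWrapper(new ByteArrayInputStream(storedBytes)));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        newSerializer.serialize(value, new DataOutputViewStreamWrapper(out));
        return out.toByteArray();
    }
}
{code}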



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: flink JPS result changes

2018-07-11 Thread miki haiat
FLIP-6 changed the execution model completely:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077
https://docs.google.com/document/d/1zwBP3LKnJ0LI1R4-9hLXBVjOXAkQS5jKrGYYoPz-xRk/edit#heading=h.giuwq6q8d23j



On Wed, Jul 11, 2018 at 5:09 PM Will Du  wrote:

> Hi folks,
> Do we have any information about the process changes after v1.5.0? I used
> to see JobManager and TaskManager processes once start-cluster.sh was
> called. But it shows the below in v1.5.0 once started. Everything works,
> but I have no idea where the JobManager is.
>
> $jps
> 2523 TaskManagerRunner
> 2190 StandaloneSessionClusterEntrypoint
>
> thanks,
> Will


Flink on Mesos - have to manually add host name to /etc/hosts

2018-07-11 Thread NEKRASSOV, ALEXEI
When I attempted to start Flink 1.4.2 on Mesos, I ran into the issue described here:
https://stackoverflow.com/questions/45391980/error-installing-flink-in-dcos
This workaround solved the problem: https://stackoverflow.com/a/48184752

Is there something that can be changed in the Flink project, to avoid this 
issue? Or is there a place to document it along with the workaround?

Alex


Re: Confusions About JDBCOutputFormat

2018-07-11 Thread Hequn Cheng
Cool. I will take a look. Thanks.

On Wed, Jul 11, 2018 at 7:08 PM, wangsan  wrote:

> Well, I see. If the connection is established when writing data into the DB,
> we need to cache the rows received since the last write.
>
> IMO, maybe we do not need to open connections repeatedly or introduce
> connection pools. Testing and refreshing the connection periodically can
> simply solve this problem. I've implemented this at
> https://github.com/apache/flink/pull/6301; it would be kind of you to review this.
>
> Best,
> wangsan
>
>
>
> On Jul 11, 2018, at 2:25 PM, Hequn Cheng  wrote:
>
> Hi wangsan,
>
> What I mean is establishing a connection each time we write data into JDBC,
> i.e., establishing a connection in the flush() function. I think this will make
> sure the connection is OK. What do you think?
>
> On Wed, Jul 11, 2018 at 12:12 AM, wangsan  wrote:
>
> Hi Hequn,
>
> Establishing a connection for each batch write may also have the idle
> connection problem, since we are not sure when the connection will be
> closed. We call the flush() method when a batch is finished or when we snapshot state,
> but what if snapshotting is not enabled and the batch size is not reached
> before the connection is closed?
>
> Maybe we could use a Timer to test the connection periodically and keep
> it alive. What do you think?
>
> I will open a jira and try to work on that issue.
>
> Best,
> wangsan
>
>
>
> On Jul 10, 2018, at 8:38 PM, Hequn Cheng  wrote:
>
> Hi wangsan,
>
> I agree with you. It would be kind of you to open a jira to check the
> problem.
>
> For the first problem, I think we need to establish a connection each time
> we execute a batch write. And it is better to get the connection from a
> connection pool.
> For the second problem, to avoid multithreading problems, I think we should
> synchronize on the batch object in the flush() method.
>
> What do you think?
>
> Best, Hequn
>
>
>
> On Tue, Jul 10, 2018 at 2:36 PM, wangsan  wrote:
>
> Hi all,
>
> I'm going to use JDBCAppendTableSink and JDBCOutputFormat in my Flink
> application. But I am confused with the implementation of JDBCOutputFormat.
>
> 1. The Connection is established when the JDBCOutputFormat is opened, and
> will be used all the time. But if this connection lies idle for a long time,
> the database will force-close the connection, and errors may occur.
> 2. The flush() method is called when batchCount exceeds the threshold,
> but it is also called while snapshotting state. So two threads may modify
> upload and batchCount, but without synchronization.
>
> Please correct me if I am wrong.
>
> ——
> wangsan
>
>
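
A minimal sketch of the two fixes discussed above, assuming a plain JDBC setup (illustrative only, not the actual JDBCOutputFormat code): flush() is synchronized so the batch-size path and the snapshot path cannot race, and the connection is validity-checked before each batch write.

{code}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Illustrative sketch, not the actual JDBCOutputFormat code.
final class KeepAliveJdbcWriter {
    private Connection connection;
    private PreparedStatement upload;
    private int batchCount;

    // synchronized: called from both the batch-size path and the snapshot path
    synchronized void flush() throws SQLException {
        if (connection == null || !connection.isValid(5)) { // 5 s validity-check timeout
            reconnect(); // hypothetical helper that reopens connection and statement
        }
        upload.executeBatch();
        batchCount = 0;
    }

    private void reconnect() throws SQLException {
        // re-create 'connection' and re-prepare 'upload' here
    }
}
{code}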
>


[jira] [Created] (FLINK-9807) Improve EventTimeWindowCheckpointITCase&LocalRecoveryITCase with parameterized

2018-07-11 Thread Congxian Qiu (JIRA)
Congxian Qiu created FLINK-9807:
---

 Summary: Improve 
EventTimeWindowCheckpointITCase&LocalRecoveryITCase with parameterized
 Key: FLINK-9807
 URL: https://issues.apache.org/jira/browse/FLINK-9807
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.5.0
Reporter: Congxian Qiu


Now, the `AbstractEventTimeWindowCheckpointITCase` and `AbstractLocalRecoveryITCase` need to be re-implemented for every backend; we can improve this by using JUnit's parameterized tests.
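
A minimal sketch of the idea, assuming JUnit 4.12's Parameterized runner (class and backend names are illustrative, not the actual test code):

{code}
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

// Illustrative sketch: one test class runs once per state backend
// instead of one abstract base class plus one subclass per backend.
@RunWith(Parameterized.class)
public class EventTimeWindowCheckpointITCase {

    @Parameterized.Parameters(name = "backend = {0}")
    public static Object[] backends() {
        return new Object[] {"rocksdb", "filesystem", "memory"};
    }

    @Parameterized.Parameter
    public String backend;

    // test methods configure checkpointing with the given 'backend' ...
}
{code}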



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9806) Add a canonical link element to documentation HTML

2018-07-11 Thread Patrick Lucas (JIRA)
Patrick Lucas created FLINK-9806:


 Summary: Add a canonical link element to documentation HTML
 Key: FLINK-9806
 URL: https://issues.apache.org/jira/browse/FLINK-9806
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.5.0
Reporter: Patrick Lucas


Flink has suffered for a while with non-optimal SEO for its documentation, 
meaning a web search for a topic covered in the documentation often produces 
results for many versions of Flink, even preferring older versions since those 
pages have been around for longer.

Using a canonical link element (see references) may alleviate this by informing search engines about where to find the latest documentation (i.e. pages hosted under https://ci.apache.org/projects/flink/flink-docs-master/).

I think this is at least worth experimenting with, and if it doesn't cause problems, even backporting it to the older release branches to eventually clean up the Flink docs' SEO and converge on advertising only the latest docs (unless a specific version is specified).

References:
 * [https://moz.com/learn/seo/canonicalization]
 * [https://yoast.com/rel-canonical/]
 * [https://support.google.com/webmasters/answer/139066?hl=en]
 * [https://en.wikipedia.org/wiki/Canonical_link_element]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9805) HTTP Redirect to Active JM in Flink CLI

2018-07-11 Thread Sayat Satybaldiyev (JIRA)
Sayat Satybaldiyev created FLINK-9805:
-

 Summary: HTTP Redirect to Active JM in Flink CLI
 Key: FLINK-9805
 URL: https://issues.apache.org/jira/browse/FLINK-9805
 Project: Flink
  Issue Type: Improvement
  Components: Client
Affects Versions: 1.5.0
Reporter: Sayat Satybaldiyev


The Flink CLI allows specifying the job manager address via the --jobmanager flag. However, in HA mode the JM can change, and a standby JM issues an HTTP redirect to the active one. During deployment via the Flink CLI with the --jobmanager flag, the CLI does not follow the redirect to the active JM and thus fails to submit the job with "Could not complete the operation. Number of retries has been exhausted".

*Proposal:*

Honor the JM HTTP redirect when leadership changes while the Flink CLI is used with the --jobmanager flag.
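
A minimal sketch of the proposed behavior using plain HttpURLConnection (illustrative only, not the CLI's actual REST client):

{code}
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative sketch: resolve a standby JM's 3xx redirect to the active JM.
public final class ActiveJobManagerResolver {

    static URL resolveActiveJobManager(URL jobManagerUrl) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) jobManagerUrl.openConnection();
        conn.setInstanceFollowRedirects(false);              // inspect the redirect ourselves
        int status = conn.getResponseCode();
        if (status == HttpURLConnection.HTTP_MOVED_TEMP
                || status == HttpURLConnection.HTTP_MOVED_PERM) {
            return new URL(conn.getHeaderField("Location")); // address of the active JM
        }
        return jobManagerUrl;                                // already talking to the active JM
    }
}
{code}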



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9804) KeyedStateBackend.getKeys() does not work on RocksDB MapState

2018-07-11 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-9804:
---

 Summary: KeyedStateBackend.getKeys() does not work on RocksDB 
MapState
 Key: FLINK-9804
 URL: https://issues.apache.org/jira/browse/FLINK-9804
 Project: Flink
  Issue Type: Improvement
  Components: State Backends, Checkpointing
Affects Versions: 1.5.0, 1.5.1
Reporter: Aljoscha Krettek
 Fix For: 1.5.2, 1.6.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Confusions About JDBCOutputFormat

2018-07-11 Thread wangsan
Well, I see. If the connection is established when writing data into the DB, we need to cache the rows received since the last write.

IMO, maybe we do not need to open connections repeatedly or introduce connection pools. Testing and refreshing the connection periodically can simply solve this problem. I've implemented this at https://github.com/apache/flink/pull/6301; it would be kind of you to review this.

Best,
wangsan


> On Jul 11, 2018, at 2:25 PM, Hequn Cheng  wrote:
> 
> Hi wangsan,
> 
> What I mean is establishing a connection each time we write data into JDBC,
> i.e., establishing a connection in the flush() function. I think this will make
> sure the connection is OK. What do you think?
> 
> On Wed, Jul 11, 2018 at 12:12 AM, wangsan wrote:
> 
>> Hi Hequn,
>> 
>> Establishing a connection for each batch write may also have the idle
>> connection problem, since we are not sure when the connection will be
>> closed. We call the flush() method when a batch is finished or when we snapshot state,
>> but what if snapshotting is not enabled and the batch size is not reached
>> before the connection is closed?
>> 
>> Maybe we could use a Timer to test the connection periodically and keep
>> it alive. What do you think?
>> 
>> I will open a jira and try to work on that issue.
>> 
>> Best,
>> wangsan
>> 
>> 
>> 
>> On Jul 10, 2018, at 8:38 PM, Hequn Cheng  wrote:
>> 
>> Hi wangsan,
>> 
>> I agree with you. It would be kind of you to open a jira to check the
>> problem.
>> 
>> For the first problem, I think we need to establish a connection each time
>> we execute a batch write. And it is better to get the connection from a
>> connection pool.
>> For the second problem, to avoid multithreading problems, I think we should
>> synchronize on the batch object in the flush() method.
>> 
>> What do you think?
>> 
>> Best, Hequn
>> 
>> 
>> 
>> On Tue, Jul 10, 2018 at 2:36 PM, wangsan wrote:
>> 
>>> Hi all,
>>> 
>>> I'm going to use JDBCAppendTableSink and JDBCOutputFormat in my Flink
>>> application. But I am confused with the implementation of JDBCOutputFormat.
>>> 
>>> 1. The Connection is established when the JDBCOutputFormat is opened, and
>>> will be used all the time. But if this connection lies idle for a long time,
>>> the database will force-close the connection, and errors may occur.
>>> 2. The flush() method is called when batchCount exceeds the threshold,
>>> but it is also called while snapshotting state. So two threads may modify
>>> upload and batchCount, but without synchronization.
>>> 
>>> Please correct me if I am wrong.
>>> 
>>> ——
>>> wangsan



[jira] [Created] (FLINK-9803) Drop canEqual() from TypeInformation, TypeSerializer, etc.

2018-07-11 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-9803:
---

 Summary: Drop canEqual() from TypeInformation, TypeSerializer, etc.
 Key: FLINK-9803
 URL: https://issues.apache.org/jira/browse/FLINK-9803
 Project: Flink
  Issue Type: Sub-task
  Components: Core, Type Serialization System
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek
 Fix For: 1.6.0


See discussion from 
https://lists.apache.org/thread.html/7cc6cfd66e96e8d33c768629b55481b6c951c68128f10256abb328fe@%3Cdev.flink.apache.org%3E

{quote}
Hi all!

As part of an attempt to simplify some code in the TypeInfo and
TypeSerializer area, I would like to drop the "canEqual" methods for the
following reason:

"canEqual()" is necessary to make proper equality checks across hierarchies
of types. This is for example useful in a collection API, stating for
example whether a List can be equal to a Collection if they have the same
contents. We don't have that here.

A certain type information (and serializer) is equal to another one if they
describe the same type, strictly. There is no necessity for cross hierarchy
checks.

This has also led to the situation that most type infos and serializers
implement just a dummy/default version of "canEqual". Many "equals()"
methods do not even call the other object's "canEqual", etc.

As a first step, we could simply deprecate the method and implement an
empty default, and remove all calls to that method.
Best,
Stephan
{quote}
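
A minimal illustration of why canEqual() adds nothing under strict same-type equality (hypothetical serializer, not actual Flink code): a getClass() check inside equals() already covers everything canEqual() did.

{code}
// Hypothetical serializer, not actual Flink code.
public class MySerializer {
    private final boolean someOption;

    public MySerializer(boolean someOption) {
        this.someOption = someOption;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null || getClass() != obj.getClass()) { // replaces canEqual()
            return false;
        }
        MySerializer other = (MySerializer) obj;
        return this.someOption == other.someOption;
    }

    @Override
    public int hashCode() {
        return Boolean.hashCode(someOption);
    }
}
{code}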





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Register

2018-07-11 Thread Chesnay Schepler
To subscribe to the dev list you have to send a mail to 
dev-subscr...@flink.apache.org


On 11.07.2018 09:17, 陈梓立 wrote:

Register





[jira] [Created] (FLINK-9802) Harden End-to-end tests against download failures

2018-07-11 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9802:
---

 Summary: Harden End-to-end tests against download failures
 Key: FLINK-9802
 URL: https://issues.apache.org/jira/browse/FLINK-9802
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.5.0, 1.6.0
Reporter: Chesnay Schepler


Several end-to-end tests download libraries (kafka, zookeeper, elasticsearch) to set them up locally for testing purposes. Currently (at least for the elasticsearch test) we do not guard against failed downloads.

We should do a sweep over all tests and harden them against download failures by retrying the download on failure, and explicitly exiting if the download did not succeed after N attempts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9801) flink-dist is missing dependency on flink-examples

2018-07-11 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9801:
---

 Summary: flink-dist is missing dependency on flink-examples
 Key: FLINK-9801
 URL: https://issues.apache.org/jira/browse/FLINK-9801
 Project: Flink
  Issue Type: Improvement
  Components: Build System, Examples
Affects Versions: 1.5.1, 1.6.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.5.1, 1.6.0


For the assembly of {{flink-dist}} we copy various batch/streaming examples 
directly from the respective /target directory.
Never mind that this is already a problem as is (see FLINK-9582), 
{{flink-dist}} defines no dependency on these modules.
If you were to only compile {{flink-dist}} with the {{-am}} flag (to also build 
all dependencies) it thus _may_ or _may not_ happen that these modules are 
actually compiled, which could cause these examples to not be included in the 
final assembly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9800) Return the relation between a running job and the original jar in REST API

2018-07-11 Thread Lasse Nedergaard (JIRA)
 Lasse Nedergaard created FLINK-9800:


 Summary: Return the relation between a running job and the 
original jar in REST API
 Key: FLINK-9800
 URL: https://issues.apache.org/jira/browse/FLINK-9800
 Project: Flink
  Issue Type: Improvement
  Components: REST
Reporter:  Lasse Nedergaard


Extend the REST endpoint /jobs to include the jarid used to start the job.

With this information it would be possible to find the original jar and thereby 
information about the source for the job.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9799) Generalize/unify state meta info

2018-07-11 Thread Stefan Richter (JIRA)
Stefan Richter created FLINK-9799:
-

 Summary: Generalize/unify state meta info
 Key: FLINK-9799
 URL: https://issues.apache.org/jira/browse/FLINK-9799
 Project: Flink
  Issue Type: Sub-task
  Components: State Backends, Checkpointing
Affects Versions: 1.5.0
Reporter: Stefan Richter
Assignee: Stefan Richter


Flink currently has a couple of classes that describe the meta data of state 
(e.g. for keyed state, operator state, broadcast state, ...) and they typically 
come with their own serialization proxy and backwards compatibility story. 
However, the differences between those meta data classes are very small, like 
different option flags and a different set of serializers. Before introducing 
yet another meta data for timers, we should unify them in a general state meta 
data class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Drop "canEqual" from TypeInformation, TypeSerializer, etc.

2018-07-11 Thread Aljoscha Krettek
I created https://issues.apache.org/jira/browse/FLINK-9798

> On 18. May 2018, at 16:10, Till Rohrmann  wrote:
> 
> +1
> 
> If we don't have a hierarchy of serializers then this method has no right
> to exist.
> 
> Cheers,
> Till
> 
> On Wed, May 16, 2018 at 11:06 AM, Timo Walther  wrote:
> 
>> +1
>> 
>> TypeInformation has too many methods that need to be implemented but
>> provide little benefit for Flink.
>> 
>> On 16.05.18 at 10:55, Ted Yu wrote:
>> 
>> +1 from me as well.
>>> 
>>> I checked a few serializer classes. The `equals` method on serializers
>>> contains the logic of `canEqual` method whose existence seems redundant.
>>> 
>>> On Wed, May 16, 2018 at 1:49 AM, Tzu-Li (Gordon) Tai wrote:
>>> 
>>> +1.
 
 Looking at the implementations of the `canEqual` method in several
 serializers, it seems like all that is done is a check whether the object
 is of the same serializer class.
 We’ll have to be careful and double check all `equals` method on
 serializers that may have relied on the `canEqual` method to perform the
 preliminary type check.
 Otherwise, this sounds good.
 
 On 16 May 2018 at 4:35:47 PM, Stephan Ewen (se...@apache.org) wrote:
 
 Hi all!
 
 As part of an attempt to simplify some code in the TypeInfo and
 TypeSerializer area, I would like to drop the "canEqual" methods for the
 following reason:
 
 "canEqual()" is necessary to make proper equality checks across
 hierarchies
 of types. This is for example useful in a collection API, stating for
 example whether a List can be equal to a Collection if they have the same
 contents. We don't have that here.
 
 A certain type information (and serializer) is equal to another one if
 they
 describe the same type, strictly. There is no necessity for cross
 hierarchy
 checks.
 
 This has also led to the situation that most type infos and serializers
 implement just a dummy/default version of "canEqual". Many "equals()"
 methods do not even call the other object's "canEqual", etc.
 
 As a first step, we could simply deprecate the method and implement an
 empty default, and remove all calls to that method.
 
 Best,
 Stephan
 
 
>> 



[jira] [Created] (FLINK-9798) Drop canEqual() from TypeInformation, TypeSerializer, etc.

2018-07-11 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-9798:
---

 Summary: Drop canEqual() from TypeInformation, TypeSerializer, etc.
 Key: FLINK-9798
 URL: https://issues.apache.org/jira/browse/FLINK-9798
 Project: Flink
  Issue Type: Improvement
  Components: Core, Type Serialization System
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek
 Fix For: 1.6.0


See discussion from 
https://lists.apache.org/thread.html/7cc6cfd66e96e8d33c768629b55481b6c951c68128f10256abb328fe@%3Cdev.flink.apache.org%3E

{quote}
Hi all!

As part of an attempt to simplify some code in the TypeInfo and
TypeSerializer area, I would like to drop the "canEqual" methods for the
following reason:

"canEqual()" is necessary to make proper equality checks across hierarchies
of types. This is for example useful in a collection API, stating for
example whether a List can be equal to a Collection if they have the same
contents. We don't have that here.

A certain type information (and serializer) is equal to another one if they
describe the same type, strictly. There is no necessity for cross hierarchy
checks.

This has also led to the situation that most type infos and serializers
implement just a dummy/default version of "canEqual". Many "equals()"
methods do not even call the other object's "canEqual", etc.

As a first step, we could simply deprecate the method and implement an
empty default, and remove all calls to that method.
Best,
Stephan
{quote}





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9797) Separate state serializers from network serializers

2018-07-11 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-9797:
--

 Summary: Separate state serializers from network serializers
 Key: FLINK-9797
 URL: https://issues.apache.org/jira/browse/FLINK-9797
 Project: Flink
  Issue Type: Improvement
  Components: Type Serialization System
Reporter: Nico Kruber


State serializers need to maintain backwards compatibility, therefore the format cannot be changed that easily, and this is honoured by the classes around {{TypeInformation}}. However, currently the same {{TypeInformation}} is being used in the network stack, where data is serialized between two operators during shuffles and so on. There, however, we do not need backwards compatibility and could easily change the format and have more performant code, e.g. use custom serializers for Java collections (not going through Kryo), etc.

I propose to separate these two (this is probably a bigger task since it is 
quite invasive).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release 1.5.1, release candidate #3

2018-07-11 Thread Chesnay Schepler

Correction on my part, it does affect all packages.

I've also found the cause. To speed up the process I only built modules 
that flink-dist depends on (see FLINK-9768). However flink-dist depends 
on neither flink-examples-batch nor flink-examples-streaming, yet 
happily accesses their target directory. The existing build process only 
worked since _by chance_ these 2 modules are built before flink-dist 
when doing a complete build.


I will rebuild the binaries (I don't think we have to cancel the RC for 
this) and open a JIRA to fix the dependencies.


On 11.07.2018 09:27, Chesnay Schepler wrote:

oh, the packages that include hadoop are really missing it...

On 11.07.2018 09:25, Chesnay Schepler wrote:
@Yaz which binary package did you check? I looked into the 
hadoop-free package and the folder is there.


Did you maybe encounter an error when extracting the package?

On 11.07.2018 05:44, Yaz Sh wrote:

-1

./examples/streaming folder is missing in binary packages


Cheers,
Yazdan


On Jul 10, 2018, at 9:57 PM, vino yang  wrote:

+1
reviewed [1], [4] and [6]

2018-07-11 3:10 GMT+08:00 Chesnay Schepler :


Hi everyone,
Please review and vote on the release candidate #3 for the version 
1.5.1,

as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which 
includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience 
releases to be

deployed to dist.apache.org [2], which are signed with the key with
fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc3" [5],
* website pull request listing the new release and adding 
announcement

blog post [6].

This RC is a slightly modified version of the previous RC, with most
release testing being applicable to both release candidates. The 
minimum

voting duration will hence be reduced to 24 hours. It is adopted by
majority approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053
[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1171
[5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc3
[6] https://github.com/apache/flink-web/pull/112


Re: [VOTE] Release 1.5.1, release candidate #3

2018-07-11 Thread Chesnay Schepler

oh, the packages that include hadoop are really missing it...

On 11.07.2018 09:25, Chesnay Schepler wrote:
@Yaz which binary package did you check? I looked into the hadoop-free 
package and the folder is there.


Did you maybe encounter an error when extracting the package?

On 11.07.2018 05:44, Yaz Sh wrote:

-1

./examples/streaming folder is missing in binary packages


Cheers,
Yazdan


On Jul 10, 2018, at 9:57 PM, vino yang  wrote:

+1
reviewed [1], [4] and [6]

2018-07-11 3:10 GMT+08:00 Chesnay Schepler :


Hi everyone,
Please review and vote on the release candidate #3 for the version 
1.5.1,

as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which 
includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience 
releases to be

deployed to dist.apache.org [2], which are signed with the key with
fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc3" [5],
* website pull request listing the new release and adding announcement
blog post [6].

This RC is a slightly modified version of the previous RC, with most
release testing being applicable to both release candidates. The 
minimum

voting duration will hence be reduced to 24 hours. It is adopted by
majority approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053
[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1171
[5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc3
[6] https://github.com/apache/flink-web/pull/112


Re: [VOTE] Release 1.5.1, release candidate #3

2018-07-11 Thread Chesnay Schepler
@Yaz which binary package did you check? I looked into the hadoop-free 
package and the folder is there.


Did you maybe encounter an error when extracting the package?

On 11.07.2018 05:44, Yaz Sh wrote:

-1

./examples/streaming folder is missing in binary packages


Cheers,
Yazdan


On Jul 10, 2018, at 9:57 PM, vino yang  wrote:

+1
reviewed [1], [4] and [6]

2018-07-11 3:10 GMT+08:00 Chesnay Schepler :


Hi everyone,
Please review and vote on the release candidate #3 for the version 1.5.1,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to be
deployed to dist.apache.org [2], which are signed with the key with
fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc3" [5],
* website pull request listing the new release and adding announcement
blog post [6].

This RC is a slightly modified version of the previous RC, with most
release testing being applicable to both release candidates. The minimum
voting duration will hence be reduced to 24 hours. It is adopted by
majority approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053
[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1171
[5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc3
[6] https://github.com/apache/flink-web/pull/112


Register

2018-07-11 Thread 陈梓立
Register