[jira] [Created] (FLINK-6843) ClientConnectionTest fails on travis

2017-06-02 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6843:
---

 Summary: ClientConnectionTest fails on travis
 Key: FLINK-6843
 URL: https://issues.apache.org/jira/browse/FLINK-6843
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.3.0
Reporter: Chesnay Schepler


Build profile: JDK 7, Hadoop 2.4.1, Scala 2.11. (Major.minor version 52.0 corresponds to Java 8 class files, so a class compiled for Java 8 is being loaded on the Java 7 JVM.)

{code}
testJobManagerRetrievalWithHAServices(org.apache.flink.client.program.ClientConnectionTest)  Time elapsed: 0.013 sec  <<< ERROR!

java.lang.UnsupportedClassVersionError: org/apache/flink/runtime/highavailability/TestingHighAvailabilityServices : Unsupported major.minor version 52.0
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.flink.client.program.ClientConnectionTest.testJobManagerRetrievalWithHAServices(ClientConnectionTest.java:122)
{code}
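
For anyone reproducing this locally, a small diagnostic like the following (illustrative only, not part of any fix) prints the class-file version of the offending class; a major version of 52 confirms it was compiled for Java 8 while the failing profile runs JDK 7.

{code}
// Diagnostic sketch only: reads a class file's major version to confirm which
// Java release it was compiled for (51 = Java 7, 52 = Java 8). The resource
// path below is the class named in the error above.
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClassVersionCheck {
	public static void main(String[] args) throws IOException {
		String resource = "org/apache/flink/runtime/highavailability/TestingHighAvailabilityServices.class";
		try (InputStream in = ClassVersionCheck.class.getClassLoader().getResourceAsStream(resource)) {
			if (in == null) {
				System.err.println("Class file not found on the classpath: " + resource);
				return;
			}
			DataInputStream data = new DataInputStream(in);
			int magic = data.readInt();            // should be 0xCAFEBABE
			int minor = data.readUnsignedShort();
			int major = data.readUnsignedShort();  // 52 means the class requires Java 8
			System.out.printf("magic=0x%X, major=%d, minor=%d%n", magic, major, minor);
		}
	}
}
{code}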



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6842) Uncomment or remove code in HadoopFileSystem

2017-06-02 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6842:
---

 Summary: Uncomment or remove code in HadoopFileSystem
 Key: FLINK-6842
 URL: https://issues.apache.org/jira/browse/FLINK-6842
 Project: Flink
  Issue Type: Improvement
  Components: Local Runtime
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Priority: Minor


I've found the following code in 
{{HadoopFileSystem#getHadoopWrapperClassNameForFileSystem}}

{code}
Configuration hadoopConf = getHadoopConfiguration();
Class clazz;
// We can activate this block once we drop Hadoop1 support (only hd2 has the getFileSystemClass-method)
//try {
//	clazz = org.apache.hadoop.fs.FileSystem.getFileSystemClass(scheme, hadoopConf);
//} catch (IOException e) {
//	LOG.info("Flink could not load the Hadoop File system implementation for scheme "+scheme);
//	return null;
//}
clazz = hadoopConf.getClass("fs." + scheme + ".impl", null, org.apache.hadoop.fs.FileSystem.class);

if (clazz != null && LOG.isDebugEnabled()) {
	LOG.debug("Flink supports {} with the Hadoop file system wrapper, impl {}", scheme, clazz);
}
return clazz;
{code}

Since we no longer support Hadoop 1, the commented-out code should either be activated or removed.
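
For illustration, here is a sketch of the "activated" variant, under the assumption that only Hadoop 2+ is on the classpath ({{FileSystem.getFileSystemClass}} exists since Hadoop 2). The surrounding class and method wrapper are placeholders, not the committed change:

{code}
// Sketch only: the commented-out Hadoop 2 branch activated, wrapped in a
// standalone helper. Only the Hadoop and SLF4J calls are real APIs.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class HadoopWrapperLookup {

	private static final Logger LOG = LoggerFactory.getLogger(HadoopWrapperLookup.class);

	static Class<? extends FileSystem> getHadoopWrapperClassNameForFileSystem(String scheme, Configuration hadoopConf) {
		Class<? extends FileSystem> clazz;
		try {
			// Hadoop 2+: resolve the file system implementation for the scheme directly.
			clazz = FileSystem.getFileSystemClass(scheme, hadoopConf);
		} catch (IOException e) {
			LOG.info("Flink could not load the Hadoop file system implementation for scheme " + scheme);
			return null;
		}
		if (clazz != null && LOG.isDebugEnabled()) {
			LOG.debug("Flink supports {} with the Hadoop file system wrapper, impl {}", scheme, clazz);
		}
		return clazz;
	}
}
{code}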



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6841) using TableSourceTable for both Stream and Batch OR remove useless import

2017-06-02 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6841:
--

 Summary: using TableSourceTable for both Stream and Batch OR 
remove useless import
 Key: FLINK-6841
 URL: https://issues.apache.org/jira/browse/FLINK-6841
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng
Assignee: sunjincheng


1. {{StreamTableSourceTable}} has an unused import of {{TableException}}.
2. {{StreamTableSourceTable}} only overrides {{getRowType}} of {{FlinkTable}}. I think we can override that method in {{TableSourceTable}} instead; if so, we can use {{TableSourceTable}} for both {{Stream}} and {{Batch}}.

What do you think? [~fhueske] [~twalthr]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6840) MultipleLinearRegression documentation contains outdated information

2017-06-02 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-6840:


 Summary: MultipleLinearRegression documentation contains outdated 
information
 Key: FLINK-6840
 URL: https://issues.apache.org/jira/browse/FLINK-6840
 Project: Flink
  Issue Type: Bug
  Components: Machine Learning Library
Reporter: Till Rohrmann
Assignee: Till Rohrmann


The documentation for {{MultipleLinearRegression}} contains outdated 
information. We should correct the documentation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6839) Improve SQL OVER alias When only one OVER window agg in selection.

2017-06-02 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6839:
--

 Summary: Improve SQL OVER alias When only one OVER window agg in 
selection.
 Key: FLINK-6839
 URL: https://issues.apache.org/jira/browse/FLINK-6839
 Project: Flink
  Issue Type: Improvement
Reporter: sunjincheng


For OVER SQL:
{code}
SELECT a, COUNT(c) OVER (ORDER BY proctime RANGE BETWEEN INTERVAL '10' SECOND PRECEDING AND CURRENT ROW) as cnt1 FROM MyTable
{code}

We expect the plan {{DataStreamCalc(select=[a, w0$o0 AS cnt1])}}, but we get {{DataStreamCalc(select=[a, w0$o0 AS $1])}}. This improvement only affects the plan check; the functionality already works well in nested queries, e.g.:
{code}
SELECT cnt1 FROM (SELECT a, COUNT(c) OVER (ORDER BY proctime RANGE BETWEEN INTERVAL '10' SECOND PRECEDING AND CURRENT ROW) as cnt1 FROM MyTable)
{code}
The SQL above works well, as mentioned in [FLINK-6760|https://issues.apache.org/jira/browse/FLINK-6760].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6838) RescalingITCase fails in master branch

2017-06-02 Thread Ted Yu (JIRA)
Ted Yu created FLINK-6838:
-

 Summary: RescalingITCase fails in master branch
 Key: FLINK-6838
 URL: https://issues.apache.org/jira/browse/FLINK-6838
 Project: Flink
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


{code}
Tests in error:
  RescalingITCase.testSavepointRescalingInKeyedState[1] » JobExecution Job execu...
  RescalingITCase.testSavepointRescalingWithKeyedAndNonPartitionedState[1] » JobExecution
{code}
Both failed with a similar cause:
{code}
testSavepointRescalingInKeyedState[1](org.apache.flink.test.checkpointing.RescalingITCase)  Time elapsed: 4.813 sec  <<< ERROR!
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
	at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply$mcV$sp(JobManager.scala:933)
	at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:876)
	at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:876)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.streaming.runtime.tasks.AsynchronousException: java.lang.Exception: Could not materialize checkpoint 4 for operator Flat Map -> Sink: Unnamed (1/2).
	at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:967)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not materialize checkpoint 4 for operator Flat Map -> Sink: Unnamed (1/2).
	at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:967)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Cannot register Closeable, registry is already closed. Closing argument.
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
	at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:894)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Cannot register Closeable, registry is already closed. Closing argument.
	at org.apache.flink.util.AbstractCloseableRegistry.registerClosable(AbstractCloseableRegistry.java:66)
	at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullSnapshotOperation.openCheckpointStream(RocksDBKeyedStateBackend.java:495)
	at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$3.openIOHandle(RocksDBKeyedStateBackend.java:394)
	at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$3.openIOHandle(RocksDBKeyedStateBackend.java:390)
	at org.apache.flink.runtime.io.async.AbstractAsyncIOCallable.call(AbstractAsyncIOCallable.java:67)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:40)
	at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:894)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
{code}

[jira] [Created] (FLINK-6837) Fix a small error message bug, And improve some message info.

2017-06-02 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6837:
--

 Summary: Fix a small error message bug, And improve some message 
info.
 Key: FLINK-6837
 URL: https://issues.apache.org/jira/browse/FLINK-6837
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng
Assignee: sunjincheng


Fix a variable reference error and improve some error messages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6836) Failing YARNSessionCapacitySchedulerITCase.testTaskManagerFailure

2017-06-02 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-6836:


 Summary: Failing 
YARNSessionCapacitySchedulerITCase.testTaskManagerFailure
 Key: FLINK-6836
 URL: https://issues.apache.org/jira/browse/FLINK-6836
 Project: Flink
  Issue Type: Bug
  Components: Tests, YARN
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
Priority: Critical


The master is currently unstable. {{YARNSessionCapacitySchedulerITCase.testTaskManagerFailure}} fails with Hadoop versions {{2.6.5}}, {{2.7.3}} and {{2.8.0}}.

See this build [1] for example.

[1] https://travis-ci.org/apache/flink/builds/238720589



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] Release Apache Flink 1.3.1

2017-06-02 Thread Nico Kruber
while fixing build issues - what about FLINK-6654?

On Friday, 2 June 2017 11:05:34 CEST Robert Metzger wrote:
> Hi devs,
> 
> I would like to release Apache Flink 1.3.1 with the following fixes:
> 
> - FLINK-6812 Elasticsearch 5 release artifacts not published to Maven
> central
> - FLINK-6783 Wrongly extracted TypeInformations for
> WindowedStream::aggregate
> - FLINK-6780 ExternalTableSource should add time attributes in the row type
> - FLINK-6775 StateDescriptor cannot be shared by multiple subtasks
> - FLINK-6763 Inefficient PojoSerializerConfigSnapshot serialization format
> - FLINK-6764 Deduplicate stateless TypeSerializers when serializing
> composite TypeSerializers
> 
> Is there anything else that we need to wait for before we vote on the first
> RC?
> 
> 
> Regards,
> Robert



signature.asc
Description: This is a digitally signed message part.


[jira] [Created] (FLINK-6835) Document the checkstyle requirements

2017-06-02 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6835:
---

 Summary: Document the checkstyle requirements
 Key: FLINK-6835
 URL: https://issues.apache.org/jira/browse/FLINK-6835
 Project: Flink
  Issue Type: Improvement
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0


We should document the checkstyle requirements somewhere.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6834) Over window doesn't support complex calculation

2017-06-02 Thread Jark Wu (JIRA)
Jark Wu created FLINK-6834:
--

 Summary: Over window doesn't support complex calculation
 Key: FLINK-6834
 URL: https://issues.apache.org/jira/browse/FLINK-6834
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Jark Wu
Assignee: Jark Wu
 Fix For: 1.4.0


The following example
{code}
val windowedTable = table
  .window(
Over partitionBy 'c orderBy 'rowtime preceding UNBOUNDED_ROW as 'w)
  .select('c, 'b, ('a.count over 'w) + 1)
{code}

will throw the following exception:
{code}
org.apache.flink.table.api.ValidationException: Expression UnresolvedOverCall(count('a),'w) failed on input check: Over window with alias $alias could not be resolved.

	at org.apache.flink.table.plan.logical.LogicalNode.failValidation(LogicalNode.scala:143)
	at org.apache.flink.table.plan.logical.LogicalNode$$anonfun$validate$1.applyOrElse(LogicalNode.scala:89)
	at org.apache.flink.table.plan.logical.LogicalNode$$anonfun$validate$1.applyOrElse(LogicalNode.scala:83)
	at org.apache.flink.table.plan.TreeNode.postOrderTransform(TreeNode.scala:72)
	at org.apache.flink.table.plan.TreeNode$$anonfun$1.apply(TreeNode.scala:46)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6833) Race condition: Asynchronous checkpointing task can fail completed StreamTask

2017-06-02 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-6833:


 Summary: Race condition: Asynchronous checkpointing task can fail 
completed StreamTask
 Key: FLINK-6833
 URL: https://issues.apache.org/jira/browse/FLINK-6833
 Project: Flink
  Issue Type: Bug
  Components: Local Runtime, State Backends, Checkpointing
Affects Versions: 1.3.0, 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
Priority: Critical


A {{StreamTask}} which is about to finish and thus transitioning its containing 
{{Task}} into the {{ExecutionState.FINISHED}} state, can be failed by a 
concurrent asynchronous checkpointing operation. The problem is that upon 
termination the {{StreamTask}} cancels all concurrent operations (amongst 
others ongoing asynchronous checkpoints). The cancellation of the async 
checkpoint triggers the {{StreamTask#handleAsyncException}} call which will 
fail the containing {{Task}}. If the {{handleAsyncException}} completes before 
the {{StreamTask}} has been properly terminated, then the containing {{Task}} 
will transition into {{ExecutionState.FAILED}} instead of 
{{ExecutionState.FINISHED}}.

In order to resolve this race condition, we should check in {{StreamTask#handleAsyncException}} whether the {{StreamTask}} is still running or has already been terminated. Only in the former case should we fail the containing {{Task}}.
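
A minimal sketch of the proposed guard, using hypothetical names rather than the actual {{StreamTask}} code:

{code}
// Sketch only: an async checkpoint failure is escalated only while the task is
// still running; after a regular finish it is ignored so that a cancelled
// checkpoint cannot flip FINISHED into FAILED.
final class AsyncExceptionGuard {

	private volatile boolean running = true;

	/** Called by the task thread when the task finishes regularly. */
	void markFinished() {
		running = false;
	}

	/** Called from the asynchronous checkpointing thread on failure/cancellation. */
	void handleAsyncException(String message, Throwable cause) {
		if (running) {
			failContainingTask(message, cause);
		}
		// else: the task already reached FINISHED; do not fail it retroactively.
	}

	private void failContainingTask(String message, Throwable cause) {
		// Placeholder for the real failure path (in Flink this would go through Task#failExternally).
		throw new RuntimeException(message, cause);
	}
}
{code}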



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6832) Missing artifact flink-connector-filesystem

2017-06-02 Thread Luis (JIRA)
Luis created FLINK-6832:
---

 Summary: Missing artifact flink-connector-filesystem
 Key: FLINK-6832
 URL: https://issues.apache.org/jira/browse/FLINK-6832
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.3.0
Reporter: Luis
Priority: Critical
 Fix For: 1.3.0


The pom file of flink-connectors declares the missing artifact flink-connector-filesystem as a child module. The code was released under the link

http://www-us.apache.org/dist/flink/flink-1.3.0/flink-1.3.0-src.tgz



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6831) Activate checkstyle for runtime/*

2017-06-02 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6831:
---

 Summary: Activate checkstyle for runtime/*
 Key: FLINK-6831
 URL: https://issues.apache.org/jira/browse/FLINK-6831
 Project: Flink
  Issue Type: Improvement
  Components: Local Runtime
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
Priority: Trivial
 Fix For: 1.4.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6830) Add ITTests for savepoint migration from 1.3

2017-06-02 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-6830:
--

 Summary: Add ITTests for savepoint migration from 1.3
 Key: FLINK-6830
 URL: https://issues.apache.org/jira/browse/FLINK-6830
 Project: Flink
  Issue Type: Test
  Components: State Backends, Checkpointing
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai
 Fix For: 1.3.1


With FLINK-6763 and FLINK-6764 alone, we will already need to change the serialization formats between 1.3.0 and 1.3.x.

We should probably add the stateful job migration ITCases for restoring from Flink 1.3.x now.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6829) Problem with release code in Maven compile mode

2017-06-02 Thread Luis (JIRA)
Luis created FLINK-6829:
---

 Summary: Problem with release code in Maven compile mode
 Key: FLINK-6829
 URL: https://issues.apache.org/jira/browse/FLINK-6829
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.3.0
Reporter: Luis
Priority: Critical
 Fix For: 1.3.0


The pom.xml file of the Flink 1.3.0 source release does not compile with Maven. It declares the module flink-annotations as a child, but that module is used as a Maven dependency. The code was released under the link
http://www-us.apache.org/dist/flink/flink-1.3.0/flink-1.3.0-src.tgz



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6827) Activate checkstyle for runtime/webmonitor

2017-06-02 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6827:
---

 Summary: Activate checkstyle for runtime/webmonitor
 Key: FLINK-6827
 URL: https://issues.apache.org/jira/browse/FLINK-6827
 Project: Flink
  Issue Type: Improvement
  Components: Webfrontend
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
Priority: Trivial
 Fix For: 1.4.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6823) Activate checkstyle for runtime/broadcast

2017-06-02 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6823:
---

 Summary: Activate checkstyle for runtime/broadcast
 Key: FLINK-6823
 URL: https://issues.apache.org/jira/browse/FLINK-6823
 Project: Flink
  Issue Type: Improvement
  Components: Local Runtime
Reporter: Chesnay Schepler
Priority: Trivial
 Fix For: 1.4.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6824) Activate checkstyle for runtime/event

2017-06-02 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6824:
---

 Summary: Activate checkstyle for runtime/event
 Key: FLINK-6824
 URL: https://issues.apache.org/jira/browse/FLINK-6824
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Coordination
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
Priority: Trivial
 Fix For: 1.4.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6822) Activate checkstyle for runtime/plugable

2017-06-02 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6822:
---

 Summary: Activate checkstyle for runtime/plugable
 Key: FLINK-6822
 URL: https://issues.apache.org/jira/browse/FLINK-6822
 Project: Flink
  Issue Type: Improvement
  Components: Local Runtime
Reporter: Chesnay Schepler
Priority: Trivial
 Fix For: 1.4.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6820) Activate checkstyle for runtime/filecache

2017-06-02 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6820:
---

 Summary: Activate checkstyle for runtime/filecache
 Key: FLINK-6820
 URL: https://issues.apache.org/jira/browse/FLINK-6820
 Project: Flink
  Issue Type: Improvement
  Components: Local Runtime
Reporter: Chesnay Schepler
Priority: Trivial
 Fix For: 1.4.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6818) Activate checkstyle for runtime/history

2017-06-02 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6818:
---

 Summary: Activate checkstyle for runtime/history
 Key: FLINK-6818
 URL: https://issues.apache.org/jira/browse/FLINK-6818
 Project: Flink
  Issue Type: Improvement
  Components: History Server
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
Priority: Trivial
 Fix For: 1.4.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] Release Apache Flink 1.3.1

2017-06-02 Thread Robert Metzger
I don't know for sure. Stephan worked on this feature.
I think the behavior / log messages are misleading and confusing right now.

On Fri, Jun 2, 2017 at 12:28 PM, Tzu-Li (Gordon) Tai 
wrote:

> Should https://issues.apache.org/jira/browse/FLINK-6643 (Flink restarts
> job in HA even if NoRestartStrategy is set) also be included?
>
>
> On 2 June 2017 at 11:21:33 AM, Robert Metzger (rmetz...@apache.org) wrote:
>
> I agree with both!
>
> I've attached a label to all JIRAs for the 1.3.1 release. Once this list
> has 0 open issues, I'll create 1.3.1:
> https://issues.apache.org/jira/issues/?jql=labels%20%3D%
> 20flink-rel-1.3.1-blockers
>
> On Fri, Jun 2, 2017 at 11:14 AM, Chesnay Schepler 
> wrote:
>
> > We should give a good error message when the state migration fails as
> > described in FLINK-6742.
> >
> >
> > On 02.06.2017 11:05, Robert Metzger wrote:
> >
> >> Hi devs,
> >>
> >> I would like to release Apache Flink 1.3.1 with the following fixes:
> >>
> >> - FLINK-6812 Elasticsearch 5 release artifacts not published to Maven
> >> central
> >> - FLINK-6783 Wrongly extracted TypeInformations for
> >> WindowedStream::aggregate
> >> - FLINK-6780 ExternalTableSource should add time attributes in the row
> >> type
> >> - FLINK-6775 StateDescriptor cannot be shared by multiple subtasks
> >> - FLINK-6763 Inefficient PojoSerializerConfigSnapshot serialization
> format
> >> - FLINK-6764 Deduplicate stateless TypeSerializers when serializing
> >> composite TypeSerializers
> >>
> >> Is there anything else that we need to wait for before we vote on the
> >> first
> >> RC?
> >>
> >>
> >> Regards,
> >> Robert
> >>
> >>
> >
>


[jira] [Created] (FLINK-6817) Fix NPE when preceding is not set in OVER window

2017-06-02 Thread Jark Wu (JIRA)
Jark Wu created FLINK-6817:
--

 Summary: Fix NPE when preceding is not set in OVER window
 Key: FLINK-6817
 URL: https://issues.apache.org/jira/browse/FLINK-6817
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Jark Wu
Priority: Minor


When preceding is not set in an OVER window, an NPE will be thrown:

{code}
val result = table
  .window(Over orderBy 'rowtime as 'w)
  .select('c, 'a.count over 'w)
{code}

{code}
java.lang.NullPointerException
at org.apache.flink.table.api.OverWindowWithOrderBy.as(windows.scala:97)
{code}

Preceding must be set in an OVER window, so we should throw a more explicit exception instead of an NPE.
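
A minimal sketch of such a check, with hypothetical names (not the actual Table API classes):

{code}
// Sketch only: fail fast with a descriptive message instead of an NPE when the
// PRECEDING clause is missing from an OVER window definition.
final class OverWindowValidation {
	static void checkPrecedingDefined(Object preceding) {
		if (preceding == null) {
			throw new IllegalArgumentException(
				"An OVER window requires a PRECEDING clause, e.g. 'preceding UNBOUNDED_ROW', before calling as().");
		}
	}
}
{code}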



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6816) Fix wrong usage of Scala string interpolation in Table API

2017-06-02 Thread Jark Wu (JIRA)
Jark Wu created FLINK-6816:
--

 Summary: Fix wrong usage of Scala string interpolation in Table API
 Key: FLINK-6816
 URL: https://issues.apache.org/jira/browse/FLINK-6816
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Jark Wu
Assignee: Jark Wu
Priority: Minor


This issue is to fix some incorrect usage of Scala string interpolation, such as a missing "s" prefix.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] Release Apache Flink 1.3.1

2017-06-02 Thread Tzu-Li (Gordon) Tai
Should https://issues.apache.org/jira/browse/FLINK-6643 (Flink restarts job in 
HA even if NoRestartStrategy is set) also be included?


On 2 June 2017 at 11:21:33 AM, Robert Metzger (rmetz...@apache.org) wrote:

I agree with both!  

I've attached a label to all JIRAs for the 1.3.1 release. Once this list  
has 0 open issues, I'll create 1.3.1:  
https://issues.apache.org/jira/issues/?jql=labels%20%3D%20flink-rel-1.3.1-blockers
  

On Fri, Jun 2, 2017 at 11:14 AM, Chesnay Schepler   
wrote:  

> We should give a good error message when the state migration fails as  
> described in FLINK-6742.  
>  
>  
> On 02.06.2017 11:05, Robert Metzger wrote:  
>  
>> Hi devs,  
>>  
>> I would like to release Apache Flink 1.3.1 with the following fixes:  
>>  
>> - FLINK-6812 Elasticsearch 5 release artifacts not published to Maven  
>> central  
>> - FLINK-6783 Wrongly extracted TypeInformations for  
>> WindowedStream::aggregate  
>> - FLINK-6780 ExternalTableSource should add time attributes in the row  
>> type  
>> - FLINK-6775 StateDescriptor cannot be shared by multiple subtasks  
>> - FLINK-6763 Inefficient PojoSerializerConfigSnapshot serialization format  
>> - FLINK-6764 Deduplicate stateless TypeSerializers when serializing  
>> composite TypeSerializers  
>>  
>> Is there anything else that we need to wait for before we vote on the  
>> first  
>> RC?  
>>  
>>  
>> Regards,  
>> Robert  
>>  
>>  
>  


Re: Building only flink-java8 module

2017-06-02 Thread Aljoscha Krettek
Unfortunately I don’t know how to suppress running the tests in the dependency 
modules that are also built.

> On 2. Jun 2017, at 11:44, Chesnay Schepler  wrote:
> 
> If all else fails, hard-code the scala versions into all poms. Simple text 
> search + replace.
> 
> On 02.06.2017 11:30, Dawid Wysakowicz wrote:
>> Unfortunately I had no luck with both of those approaches. The first one
>> runs tests for all dependent modules, while the second one (which I already
>> tried using fails with the provided error). Maybe some ideas how can I
>> run/compiles tests using eclipse compiler from IntelliJ? I need to test
>> lambdas behaviour for FLINK-6783. Any ideas highly appreciated.
>> 
>> Z pozdrowieniami! / Cheers!
>> 
>> Dawid Wysakowicz
>> 
>> *Data/Software Engineer*
>> 
>> Skype: dawid_wys | Twitter: @OneMoreCoder
>> 
>> 
>> 
>> 2017-06-01 18:33 GMT+02:00 Dawid Wysakowicz :
>> 
>>> I tried the second approach before and it results in the error with
>>> scala.binary.version I attached, which is the same Ted got. I use it though
>>> for other modules and it works.
>>> 
>>> I will try the first approach soon.
>>> 
>>> Z pozdrowieniami! / Cheers!
>>> 
>>> Dawid Wysakowicz
>>> 
>>> *Data/Software Engineer*
>>> 
>>> Skype: dawid_wys | Twitter: @OneMoreCoder
>>> 
>>> 
>>> 
>>> 2017-06-01 18:14 GMT+02:00 Ted Yu :
>>> 
 That removes the error.
 However, looks like tests from other module(s) are run as well.
 Just an example:
 
 
 
 16:12:53,103 INFO
  org.apache.flink.runtime.taskmanager.TaskManagerRegistrationTest  -
 
 
 
 On Thu, Jun 1, 2017 at 9:08 AM, Aljoscha Krettek 
 wrote:
 
> Ah, I forgot that you also have to add “-Pjdk8” to activate the Java 8
> profile. Otherwise the flink-java8 module will not be referenced in the
> main pom.
> 
>> On 1. Jun 2017, at 17:42, Ted Yu  wrote:
>> 
>> When using the second approach (install followed by 'mvn verify'), I
 got
>> the following:
>> 
>> [ERROR] Failed to execute goal on project flink-java8_2.10: Could not
>> resolve dependencies for project
>> org.apache.flink:flink-java8_2.10:jar:1.4-SNAPSHOT: Failed to collect
>> dependencies at
>> org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT: Failed
 to
> read
>> artifact descriptor for
>> org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT: Failure
 to
>> find
>> org.apache.flink:flink-examples_${scala.binary.version}:pom:
 1.4-SNAPSHOT
> in
>> https://repository.apache.org/snapshots was cached in the local
> repository,
>> resolution will not be reattempted until the update interval of
>> apache.snapshots has elapsed or updates are forced -> [Help 1]
>> 
>> Looks like ${scala.binary.version} was not substituted for correctly.
>> 
>> On Thu, Jun 1, 2017 at 8:24 AM, Ted Yu  wrote:
>> 
>>> When I used the command given by Aljoscha, I got:
>>> 
>>> https://pastebin.com/8WTGvdFQ
>>> 
>>> FYI
>>> 
>>> On Thu, Jun 1, 2017 at 8:17 AM, Aljoscha Krettek <
 aljos...@apache.org>
>>> wrote:
>>> 
 Hi,
 
 I think you can use something like
 
 mvn verify -am -pl flink-java8
 
 (From the base directory)
 
 The -pl flag will tell maven to only do that module while -am tells
 it
> to
 also builds its dependencies. This might or might not also run the
> tests on
 the dependent-upon projects, I’m not sure.
 
 As an alternative you can do “mvn clean install …” (skipping tests
 and
 everything) and then switch into the flink-java8 directory and run
 “mvn
 verify” there.
 
 Best,
 Aljoscha
 
 
> On 1. Jun 2017, at 16:04, Dawid Wysakowicz <
> wysakowicz.da...@gmail.com>
 wrote:
> Hi devs!
> 
> Recently I tried running* mvn verify* just for the *flink-java8*
> module
 (to
> run those tests locally) and it fails with the following error:
> 
> [ERROR] Failed to execute goal on project flink-java8_2.10: Could
 not
>> resolve dependencies for project
>> org.apache.flink:flink-java8_2.10:jar:1.4-SNAPSHOT: Failed to
> collect
>> dependencies at
>> org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT:
 Failed
 to read
>> artifact descriptor for
>> org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT:
 Failure
 to
>> find

[jira] [Created] (FLINK-6815) Javadocs don't work anymore in Flink 1.4-SNAPSHOT

2017-06-02 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-6815:
-

 Summary: Javadocs don't work anymore in Flink 1.4-SNAPSHOT
 Key: FLINK-6815
 URL: https://issues.apache.org/jira/browse/FLINK-6815
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.4.0
Reporter: Robert Metzger


https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/scala/KeyedStream.html
 results in a 404 error.

The problem 
(https://ci.apache.org/builders/flink-docs-master/builds/731/steps/Java%20&%20Scala%20docs/logs/stdio)
 is the following:

{code}
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile (doc) on project flink-annotations: wrap: org.apache.maven.artifact.resolver.ArtifactNotFoundException: Could not find artifact com.typesafe.genjavadoc:genjavadoc-plugin_2.10.6:jar:0.8 in central (https://repo.maven.apache.org/maven2)
{code}

I think the problem is that we upgraded the scala version to 2.10.6, but the 
plugin doesn't have version 0.8 for that scala version.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6814) Store information about whether or not a registered state is queryable in checkpoints

2017-06-02 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-6814:
--

 Summary: Store information about whether or not a registered state 
is queryable in checkpoints
 Key: FLINK-6814
 URL: https://issues.apache.org/jira/browse/FLINK-6814
 Project: Flink
  Issue Type: Improvement
  Components: Queryable State, State Backends, Checkpointing
Reporter: Tzu-Li (Gordon) Tai


Currently, we store almost all information that comes with the registered state's {{StateDescriptor}}s (state name, state serializer, etc.) in checkpoints, except for information about whether or not the state is queryable. I propose to add that as well.
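
As a rough illustration only (the class and field names below are hypothetical, not Flink's actual checkpoint metadata classes), the per-state meta info written to a checkpoint would gain a boolean flag:

{code}
// Sketch only: the meta info written per registered state gains a 'queryable'
// flag alongside the name and serializer information already stored.
import java.io.DataOutputStream;
import java.io.IOException;

final class StateMetaInfoSketch {
	final String stateName;
	final String serializerClassName;
	final boolean queryable; // the proposed addition

	StateMetaInfoSketch(String stateName, String serializerClassName, boolean queryable) {
		this.stateName = stateName;
		this.serializerClassName = serializerClassName;
		this.queryable = queryable;
	}

	void write(DataOutputStream out) throws IOException {
		out.writeUTF(stateName);
		out.writeUTF(serializerClassName);
		out.writeBoolean(queryable); // new field in the checkpoint format
	}
}
{code}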



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Building only flink-java8 module

2017-06-02 Thread Chesnay Schepler
If all else fails, hard-code the scala versions into all poms. Simple 
text search + replace.


On 02.06.2017 11:30, Dawid Wysakowicz wrote:

Unfortunately I had no luck with both of those approaches. The first one
runs tests for all dependent modules, while the second one (which I already
tried using fails with the provided error). Maybe some ideas how can I
run/compiles tests using eclipse compiler from IntelliJ? I need to test
lambdas behaviour for FLINK-6783. Any ideas highly appreciated.

Z pozdrowieniami! / Cheers!

Dawid Wysakowicz

*Data/Software Engineer*

Skype: dawid_wys | Twitter: @OneMoreCoder



2017-06-01 18:33 GMT+02:00 Dawid Wysakowicz :


I tried the second approach before and it results in the error with
scala.binary.version I attached, which is the same Ted got. I use it though
for other modules and it works.

I will try the first approach soon.

Z pozdrowieniami! / Cheers!

Dawid Wysakowicz

*Data/Software Engineer*

Skype: dawid_wys | Twitter: @OneMoreCoder



2017-06-01 18:14 GMT+02:00 Ted Yu :


That removes the error.
However, looks like tests from other module(s) are run as well.
Just an example:



16:12:53,103 INFO
  org.apache.flink.runtime.taskmanager.TaskManagerRegistrationTest  -



On Thu, Jun 1, 2017 at 9:08 AM, Aljoscha Krettek 
wrote:


Ah, I forgot that you also have to add “-Pjdk8” to activate the Java 8
profile. Otherwise the flink-java8 module will not be referenced in the
main pom.


On 1. Jun 2017, at 17:42, Ted Yu  wrote:

When using the second approach (install followed by 'mvn verify'), I

got

the following:

[ERROR] Failed to execute goal on project flink-java8_2.10: Could not
resolve dependencies for project
org.apache.flink:flink-java8_2.10:jar:1.4-SNAPSHOT: Failed to collect
dependencies at
org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT: Failed

to

read

artifact descriptor for
org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT: Failure

to

find
org.apache.flink:flink-examples_${scala.binary.version}:pom:

1.4-SNAPSHOT

in

https://repository.apache.org/snapshots was cached in the local

repository,

resolution will not be reattempted until the update interval of
apache.snapshots has elapsed or updates are forced -> [Help 1]

Looks like ${scala.binary.version} was not substituted for correctly.

On Thu, Jun 1, 2017 at 8:24 AM, Ted Yu  wrote:


When I used the command given by Aljoscha, I got:

https://pastebin.com/8WTGvdFQ

FYI

On Thu, Jun 1, 2017 at 8:17 AM, Aljoscha Krettek <

aljos...@apache.org>

wrote:


Hi,

I think you can use something like

mvn verify -am -pl flink-java8

(From the base directory)

The -pl flag will tell maven to only do that module while -am tells

it

to

also builds its dependencies. This might or might not also run the

tests on

the dependent-upon projects, I’m not sure.

As an alternative you can do “mvn clean install …” (skipping tests

and

everything) and then switch into the flink-java8 directory and run

“mvn

verify” there.

Best,
Aljoscha



On 1. Jun 2017, at 16:04, Dawid Wysakowicz <

wysakowicz.da...@gmail.com>

wrote:

Hi devs!

Recently I tried running* mvn verify* just for the *flink-java8*

module

(to

run those tests locally) and it fails with the following error:

[ERROR] Failed to execute goal on project flink-java8_2.10: Could

not

resolve dependencies for project
org.apache.flink:flink-java8_2.10:jar:1.4-SNAPSHOT: Failed to

collect

dependencies at
org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT:

Failed

to read

artifact descriptor for
org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT:

Failure

to

find
org.apache.flink:flink-examples_${scala.binary.

version}:pom:1.4-SNAPSHOT

in

https://repository.apache.org/snapshots was cached in the local
repository, resolution will not be reattempted until the update

interval of

apache.snapshots has elapsed or updates are forced -> [Help 1]


That strategy works for any other module I tried. I would be

grateful

for

any tips how can I run just tests for flink-java8 locally.

Thanks in advance.

Z pozdrowieniami! / Cheers!

Dawid Wysakowicz

*Data/Software Engineer*

Skype: dawid_wys | Twitter: @OneMoreCoder












Re: Building only flink-java8 module

2017-06-02 Thread Dawid Wysakowicz
Unfortunately I had no luck with both of those approaches. The first one
runs tests for all dependent modules, while the second one (which I already
tried using fails with the provided error). Maybe some ideas how can I
run/compiles tests using eclipse compiler from IntelliJ? I need to test
lambdas behaviour for FLINK-6783. Any ideas highly appreciated.

Z pozdrowieniami! / Cheers!

Dawid Wysakowicz

*Data/Software Engineer*

Skype: dawid_wys | Twitter: @OneMoreCoder



2017-06-01 18:33 GMT+02:00 Dawid Wysakowicz :

> I tried the second approach before and it results in the error with
> scala.binary.version I attached, which is the same Ted got. I use it though
> for other modules and it works.
>
> I will try the first approach soon.
>
> Z pozdrowieniami! / Cheers!
>
> Dawid Wysakowicz
>
> *Data/Software Engineer*
>
> Skype: dawid_wys | Twitter: @OneMoreCoder
>
> 
>
> 2017-06-01 18:14 GMT+02:00 Ted Yu :
>
>> That removes the error.
>> However, looks like tests from other module(s) are run as well.
>> Just an example:
>>
>> 
>> 
>> 16:12:53,103 INFO
>>  org.apache.flink.runtime.taskmanager.TaskManagerRegistrationTest  -
>> 
>> 
>>
>> On Thu, Jun 1, 2017 at 9:08 AM, Aljoscha Krettek 
>> wrote:
>>
>> > Ah, I forgot that you also have to add “-Pjdk8” to activate the Java 8
>> > profile. Otherwise the flink-java8 module will not be referenced in the
>> > main pom.
>> >
>> > > On 1. Jun 2017, at 17:42, Ted Yu  wrote:
>> > >
>> > > When using the second approach (install followed by 'mvn verify'), I
>> got
>> > > the following:
>> > >
>> > > [ERROR] Failed to execute goal on project flink-java8_2.10: Could not
>> > > resolve dependencies for project
>> > > org.apache.flink:flink-java8_2.10:jar:1.4-SNAPSHOT: Failed to collect
>> > > dependencies at
>> > > org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT: Failed
>> to
>> > read
>> > > artifact descriptor for
>> > > org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT: Failure
>> to
>> > > find
>> > > org.apache.flink:flink-examples_${scala.binary.version}:pom:
>> 1.4-SNAPSHOT
>> > in
>> > > https://repository.apache.org/snapshots was cached in the local
>> > repository,
>> > > resolution will not be reattempted until the update interval of
>> > > apache.snapshots has elapsed or updates are forced -> [Help 1]
>> > >
>> > > Looks like ${scala.binary.version} was not substituted for correctly.
>> > >
>> > > On Thu, Jun 1, 2017 at 8:24 AM, Ted Yu  wrote:
>> > >
>> > >> When I used the command given by Aljoscha, I got:
>> > >>
>> > >> https://pastebin.com/8WTGvdFQ
>> > >>
>> > >> FYI
>> > >>
>> > >> On Thu, Jun 1, 2017 at 8:17 AM, Aljoscha Krettek <
>> aljos...@apache.org>
>> > >> wrote:
>> > >>
>> > >>> Hi,
>> > >>>
>> > >>> I think you can use something like
>> > >>>
>> > >>> mvn verify -am -pl flink-java8
>> > >>>
>> > >>> (From the base directory)
>> > >>>
>> > >>> The -pl flag will tell maven to only do that module while -am tells
>> it
>> > to
>> > >>> also builds its dependencies. This might or might not also run the
>> > tests on
>> > >>> the dependent-upon projects, I’m not sure.
>> > >>>
>> > >>> As an alternative you can do “mvn clean install …” (skipping tests
>> and
>> > >>> everything) and then switch into the flink-java8 directory and run
>> “mvn
>> > >>> verify” there.
>> > >>>
>> > >>> Best,
>> > >>> Aljoscha
>> > >>>
>> > >>>
>> >  On 1. Jun 2017, at 16:04, Dawid Wysakowicz <
>> > wysakowicz.da...@gmail.com>
>> > >>> wrote:
>> > 
>> >  Hi devs!
>> > 
>> >  Recently I tried running* mvn verify* just for the *flink-java8*
>> > module
>> > >>> (to
>> >  run those tests locally) and it fails with the following error:
>> > 
>> >  [ERROR] Failed to execute goal on project flink-java8_2.10: Could
>> not
>> > > resolve dependencies for project
>> > > org.apache.flink:flink-java8_2.10:jar:1.4-SNAPSHOT: Failed to
>> > collect
>> > > dependencies at
>> > > org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT:
>> Failed
>> > >>> to read
>> > > artifact descriptor for
>> > > org.apache.flink:flink-examples-batch_2.10:jar:1.4-SNAPSHOT:
>> Failure
>> > >>> to
>> > > find
>> > > org.apache.flink:flink-examples_${scala.binary.
>> > version}:pom:1.4-SNAPSHOT
>> > >>> in
>> > > https://repository.apache.org/snapshots was cached in the local
>> > > repository, resolution will not be reattempted until the update
>> > >>> interval of
>> > > apache.snapshots has elapsed or updates are forced -> [Help 1]
>> > >
>> > 
>> >  That strategy works for any other module I tried. I would be
>> grateful
>> > >>> for
>> >  any tips how can I run just tests 

Re: [DISCUSS] Release Apache Flink 1.3.1

2017-06-02 Thread Robert Metzger
I agree with both!

I've attached a label to all JIRAs for the 1.3.1 release. Once this list
has 0 open issues, I'll create 1.3.1:
https://issues.apache.org/jira/issues/?jql=labels%20%3D%20flink-rel-1.3.1-blockers

On Fri, Jun 2, 2017 at 11:14 AM, Chesnay Schepler 
wrote:

> We should give a good error message when the state migration fails as
> described in FLINK-6742.
>
>
> On 02.06.2017 11:05, Robert Metzger wrote:
>
>> Hi devs,
>>
>> I would like to release Apache Flink 1.3.1 with the following fixes:
>>
>> - FLINK-6812 Elasticsearch 5 release artifacts not published to Maven
>> central
>> - FLINK-6783 Wrongly extracted TypeInformations for
>> WindowedStream::aggregate
>> - FLINK-6780 ExternalTableSource should add time attributes in the row
>> type
>> - FLINK-6775 StateDescriptor cannot be shared by multiple subtasks
>> - FLINK-6763 Inefficient PojoSerializerConfigSnapshot serialization format
>> - FLINK-6764 Deduplicate stateless TypeSerializers when serializing
>> composite TypeSerializers
>>
>> Is there anything else that we need to wait for before we vote on the
>> first
>> RC?
>>
>>
>> Regards,
>> Robert
>>
>>
>


Re: [DISCUSS] Planning Release 1.4

2017-06-02 Thread Robert Metzger
I agree that it was quite annoying to merge everything to two branches.
But part of that problem was that many big features were merged last minute
and then fixed after the feature freeze.
In an ideal world, all features are stable, tested and documented when the
feature freeze happens and most commits go into master only.

I wonder if we can manage to merge queued minor features before the feature
freeze to avoid the issue in the future?

If we all agree that this doesn't work, we can also try to delay the
feature freeze. I just fear that this will make it harder to meet the
release deadline.


Robert

On Thu, Jun 1, 2017 at 6:05 PM, Greg Hogan  wrote:

> I’d like to propose keeping the same schedule but move branch forking from
> the feature freeze to the code freeze. The early fork required duplicate
> verification and commits for numerous bug fixes and minor features which
> had been reviewed but were still queued. There did not look to be much new
> development merged to master between the freezes.
>
> Greg
>
>
> > On Jun 1, 2017, at 11:26 AM, Robert Metzger  wrote:
> >
> > Hi all,
> >
> > Flink 1.2 was released on February 2, Flink 1.3 on June 1, which means
> > we've managed to release Flink 1.3 in almost exactly 4 months!
> >
> > For the 1.4 release, I've put the following deadlines into the wiki [1]:
> >
> > *Next scheduled major release*: 1.4.0
> > *Feature freeze (branch forking)*:  4. September 2017
> > *Code freeze (first voting RC)*:  18 September 2017
> > *Release date*: 29 September 2017
> >
> > I'll try to send a message every month into this thread to have a
> countdown
> > to the next feature freeze.
> >
> >
> > [1]
> > https://cwiki.apache.org/confluence/display/FLINK/
> Flink+Release+and+Feature+Plan
>
>


Re: [DISCUSS] Release Apache Flink 1.3.1

2017-06-02 Thread Tzu-Li (Gordon) Tai
Thanks for starting the discussion, Robert.

IMO, we should also include https://issues.apache.org/jira/browse/FLINK-6772 
(Incorrect ordering of matched state events in Flink CEP).

Cheers,
Gordon

On 2 June 2017 at 11:06:02 AM, Robert Metzger (rmetz...@apache.org) wrote:

Hi devs,  

I would like to release Apache Flink 1.3.1 with the following fixes:  

- FLINK-6812 Elasticsearch 5 release artifacts not published to Maven  
central  
- FLINK-6783 Wrongly extracted TypeInformations for  
WindowedStream::aggregate  
- FLINK-6780 ExternalTableSource should add time attributes in the row type  
- FLINK-6775 StateDescriptor cannot be shared by multiple subtasks  
- FLINK-6763 Inefficient PojoSerializerConfigSnapshot serialization format  
- FLINK-6764 Deduplicate stateless TypeSerializers when serializing  
composite TypeSerializers  

Is there anything else that we need to wait for before we vote on the first  
RC?  


Regards,  
Robert  


Re: flink-connector-elasticsearch5_2.11 version 1.3.0 is missing

2017-06-02 Thread Tzu-Li (Gordon) Tai
Hi Fritz,

You’re right, the ES5 connector was not published to Maven central, you’re 
specifying the artifactId correctly. Thanks for reporting this.

Really sorry about this, this would definitely be a blocker that we should fix 
soon for 1.3.1: https://issues.apache.org/jira/browse/FLINK-6812

For the time being, could you continue using 1.3-SNAPSHOT? Nothing was changed 
to the ES5 connector since it was merged, so it should be fully compatible.
We should be able to expect the fix for this in 1.3.1 very soon (there was 
already discussion to release 1.3.1 in the next couple of days).

Cheers,
Gordon

On 1 June 2017 at 8:43:27 PM, Fritz Budiyanto (fbudi...@icloud.com) wrote:

Hi All,  

I updated my pom file to use the newly release 1.3.0, and my build failed.  
Looks like there is no flink connector for ES5 with version 1.3.0.  
Could you someone release it ?  
Am I using the right artifactId connector for ES5/1.3.0 ?  

Thanks,  
Fritz  

  
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-elasticsearch5_2.11</artifactId>
+ <version>1.3.0</version>
- <version>1.3-SNAPSHOT</version>
</dependency>



[jira] [Created] (FLINK-6813) Add DATEDIFF as built-in scalar function

2017-06-02 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6813:
--

 Summary: Add DATEDIFF as built-in scalar function
 Key: FLINK-6813
 URL: https://issues.apache.org/jira/browse/FLINK-6813
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng
Assignee: sunjincheng


* Syntax
DATEDIFF ( datepart , startdate , enddate )
- datepart: the part of startdate and enddate that specifies the type of boundary crossed.
- startdate: an expression that can be resolved to a time or date.
- enddate: same as startdate.
* Example
SELECT DATEDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') FROM tab; --> 2
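
To make the boundary-crossing semantics concrete, here is a small standalone Java sketch (illustrative only, not the proposed Flink implementation) showing why the year example yields 2 rather than 1:

{code}
// DATEDIFF in this style counts datepart *boundaries crossed*, not elapsed full
// units. For datepart = year that is simply the difference of the year fields.
import java.time.LocalDateTime;

public class DateDiffExample {
	static long dateDiffYears(LocalDateTime start, LocalDateTime end) {
		return end.getYear() - start.getYear();
	}

	public static void main(String[] args) {
		LocalDateTime start = LocalDateTime.parse("2015-12-31T23:59:59.999");
		LocalDateTime end = LocalDateTime.parse("2017-01-01T00:00:00.000");
		System.out.println(dateDiffYears(start, end)); // prints 2
	}
}
{code}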



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6812) Elasticsearch 5 release artifacts not published to Maven central

2017-06-02 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-6812:
--

 Summary: Elasticsearch 5 release artifacts not published to Maven 
central
 Key: FLINK-6812
 URL: https://issues.apache.org/jira/browse/FLINK-6812
 Project: Flink
  Issue Type: Bug
  Components: ElasticSearch Connector
Affects Versions: 1.3.0
Reporter: Tzu-Li (Gordon) Tai
Priority: Blocker
 Fix For: 1.3.1


Release artifacts for the Elasticsearch 5 connector are not published to Maven Central. Elasticsearch 5 requires at least Java 8, so we need to build this connector with Java 8 for the release.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6811) Add DATEADD/DATESUB/DATEDIFF as built-in scalar functions

2017-06-02 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6811:
--

 Summary: Add DATEADD/DATESUB/DATEDIFF as built-in scalar functions
 Key: FLINK-6811
 URL: https://issues.apache.org/jira/browse/FLINK-6811
 Project: Flink
  Issue Type: Sub-task
Reporter: sunjincheng






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6809) side outputs documentation: wrong variable name in java example code

2017-06-02 Thread Petr Novotnik (JIRA)
Petr Novotnik created FLINK-6809:


 Summary: side outputs documentation: wrong variable name in java 
example code
 Key: FLINK-6809
 URL: https://issues.apache.org/jira/browse/FLINK-6809
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.3.0
Reporter: Petr Novotnik
Priority: Trivial


The first parameter to the {{processElement}} method in the example for 
side-outputs 
[here|https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/stream/side_output.html]
 is wrongly named {{input}}, but should read {{value}}.
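
For reference, a minimal self-contained version of the corrected snippet, assuming the usual Flink 1.3 side-output pattern (the surrounding class and job setup are illustrative):

{code}
// Sketch of the corrected documentation example: the first parameter of
// processElement is named 'value' and that name is used inside the body.
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SideOutputExample {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		final OutputTag<String> outputTag = new OutputTag<String>("side-output") {};

		DataStream<Integer> input = env.fromElements(1, 2, 3);

		SingleOutputStreamOperator<Integer> main = input
			.process(new ProcessFunction<Integer, Integer>() {
				@Override
				public void processElement(Integer value, Context ctx, Collector<Integer> out) throws Exception {
					out.collect(value);                        // regular output
					ctx.output(outputTag, "sideout-" + value); // side output
				}
			});

		main.getSideOutput(outputTag).print();
		env.execute("side output example");
	}
}
{code}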



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)