[jira] [Created] (FLINK-6693) Support DATE_FORMAT function in the SQL API

2017-05-23 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6693:
-

 Summary: Support DATE_FORMAT function in the SQL API
 Key: FLINK-6693
 URL: https://issues.apache.org/jira/browse/FLINK-6693
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Haohui Mai
Assignee: Haohui Mai


It would be quite handy to support the {{DATE_FORMAT}} function in Flink for 
various date / time related operations.

The specification of the {{DATE_FORMAT}} function can be found in 
https://prestodb.io/docs/current/functions/datetime.html.
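Per the Presto spec linked above, {{date_format}} uses MySQL-style {{%}} specifiers (e.g. {{%Y-%m-%d}}). As a rough, Flink-independent sketch of the intended semantics, a small subset of specifiers can be mapped onto {{java.time}} patterns (the {{DateFormatSketch.dateFormat}} helper and the specifier subset below are illustrative only, not Flink API):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.LinkedHashMap;
import java.util.Map;

public class DateFormatSketch {

    // Subset of MySQL-style specifiers (as used by Presto's date_format),
    // mapped onto java.time pattern letters. Illustrative only.
    private static final Map<String, String> SPECIFIERS = new LinkedHashMap<>();
    static {
        SPECIFIERS.put("%Y", "yyyy"); // 4-digit year
        SPECIFIERS.put("%m", "MM");   // month, 01-12
        SPECIFIERS.put("%d", "dd");   // day of month, 01-31
        SPECIFIERS.put("%H", "HH");   // hour of day, 00-23
        SPECIFIERS.put("%i", "mm");   // minutes, 00-59
        SPECIFIERS.put("%s", "ss");   // seconds, 00-59
    }

    public static String dateFormat(LocalDateTime ts, String mysqlFormat) {
        // Translate the MySQL-style format string into a java.time pattern.
        String javaPattern = mysqlFormat;
        for (Map.Entry<String, String> e : SPECIFIERS.entrySet()) {
            javaPattern = javaPattern.replace(e.getKey(), e.getValue());
        }
        return ts.format(DateTimeFormatter.ofPattern(javaPattern));
    }

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2017, 5, 23, 21, 5, 0);
        System.out.println(dateFormat(ts, "%Y-%m-%d %H:%i")); // 2017-05-23 21:05
    }
}
```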



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6692) The flink-dist jar contains unshaded netty jar

2017-05-23 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6692:
-

 Summary: The flink-dist jar contains unshaded netty jar
 Key: FLINK-6692
 URL: https://issues.apache.org/jira/browse/FLINK-6692
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 1.3.0


The {{flink-dist}} jar contains unshaded netty 3 and netty 4 classes:

{noformat}
io/netty/handler/codec/http/router/
io/netty/handler/codec/http/router/BadClientSilencer.class
io/netty/handler/codec/http/router/MethodRouted.class
io/netty/handler/codec/http/router/Handler.class
io/netty/handler/codec/http/router/Router.class
io/netty/handler/codec/http/router/DualMethodRouter.class
io/netty/handler/codec/http/router/Routed.class
io/netty/handler/codec/http/router/AbstractHandler.class
io/netty/handler/codec/http/router/KeepAliveWrite.class
io/netty/handler/codec/http/router/DualAbstractHandler.class
io/netty/handler/codec/http/router/MethodRouter.class
{noformat}

{noformat}
org/jboss/netty/util/internal/jzlib/InfBlocks.class
org/jboss/netty/util/internal/jzlib/InfCodes.class
org/jboss/netty/util/internal/jzlib/InfTree.class
org/jboss/netty/util/internal/jzlib/Inflate$1.class
org/jboss/netty/util/internal/jzlib/Inflate.class
org/jboss/netty/util/internal/jzlib/JZlib$WrapperType.class
org/jboss/netty/util/internal/jzlib/JZlib.class
org/jboss/netty/util/internal/jzlib/StaticTree.class
org/jboss/netty/util/internal/jzlib/Tree.class
org/jboss/netty/util/internal/jzlib/ZStream$1.class
org/jboss/netty/util/internal/jzlib/ZStream.class
{noformat}

Is this expected behavior?
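For reference, one way to check a distribution jar for such unshaded classes. This is an illustrative sketch, not part of the Flink build; the class and method names are made up for this example:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class UnshadedNettyCheck {

    // Returns all entries of the given jar that live in an unshaded netty
    // package: netty 4 under io/netty, netty 3 under org/jboss/netty.
    public static List<String> listUnshaded(String jarPath) throws IOException {
        List<String> hits = new ArrayList<>();
        try (JarFile jar = new JarFile(jarPath)) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName();
                if (name.startsWith("io/netty/") || name.startsWith("org/jboss/netty/")) {
                    hits.add(name);
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) throws IOException {
        // args[0]: path to the flink-dist jar to inspect
        listUnshaded(args[0]).forEach(System.out::println);
    }
}
```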





Re: [DISCUSS] Release 1.3.0 RC1 (Non voting, testing release candidate)

2017-05-23 Thread Stefan Richter
I have fixes ready for both, FLINK-6690 and FLINK-6685, which I will merge 
tomorrow.

> Am 23.05.2017 um 21:05 schrieb Chesnay Schepler :
> 
> It appears that rescaling is broken, I've filed 
> https://issues.apache.org/jira/browse/FLINK-6690 for that.
> 
> needless to say, this is a release blocker.
> 
> On 23.05.2017 20:55, Robert Metzger wrote:
>> I know I'm talking to myself here :) Anyways, I was running into some
>> issues while creating the release (I was using master instead of the
>> release-1.3 branch, which led to some issues with the scala 2.10 / 2.11
>> switch).
>> 
>> The RC2 is basically ready, however, there's at least one new blocker:
>> https://issues.apache.org/jira/browse/FLINK-6685 which needs addressing
>> first.
>> 
>> Let me know if you want me to publish the new RC2. Otherwise, I'll re-do it
>> with the fix included.
>> 
>> On Tue, May 23, 2017 at 10:35 AM, Robert Metzger 
>> wrote:
>> 
>>> I've started building the RC.
>>> 
>>> On Mon, May 22, 2017 at 6:01 PM, Robert Metzger 
>>> wrote:
>>> 
 Gordon's PR has been merged. I forgot one blocking issue. Till created a
 PR for it: https://issues.apache.org/jira/browse/FLINK-6328
 Once travis has passed, I'll merge that one and then do the RC.
 
 On Mon, May 22, 2017 at 10:36 AM, Robert Metzger 
 wrote:
 
> Thanks a lot for doing the legal checks for the release.
> 
> I'll create the first voting release candidate once
> https://github.com/apache/flink/pull/3937 is merged.
> 
> On Fri, May 19, 2017 at 4:45 PM, Xiaowei Jiang 
> wrote:
> 
>> Hi Robert,
>> 
>> I did the following checks and found no issues:
>> 
>>   - Check if checksums and GPG files match the corresponding release
>> files
>>   - Verify that the source archives do not contain any binaries
>>   - Check if the source release is building properly with Maven
>> (including
>> license header check and checkstyle). Also the tests should be executed
>> (mvn clean verify).
>>   - Check build for custom Hadoop version (2.3.0, 2.4.1, 2.6.3, 2.7.2)
>>   - Check build for Scala 2.11
>>   - Check that the README.md file is meaningful
>> 
>> thanks
>> Xiaowei
>> 
>> On Fri, May 19, 2017 at 6:29 PM, Chesnay Schepler 
>> wrote:
>> 
>>> Whoops, this is the PR for enabling the test:
>>> https://github.com/apache/flink/pull/3844
>>> 
>>> 
>>> On 19.05.2017 12:14, Robert Metzger wrote:
>>> 
 Thank you for all your input.
 
 @Chesnay, in your email you are pointing to the same PR twice:
 This PR fixes the compilation on Windows:  (reviewed once, most
>> recent
 changes not reviewed)
 https://github.com/apache/flink/pull/3854
 This PR enables a test for savepoint compatibility: (nice to have,
>> easy to
 review)
 https://github.com/apache/flink/pull/3854
 
 Also the "should define more than one task slot" thing is not
>> important
 IMO.
 
 I think the "empty path on windows" thing is not a release blocker.
 
 --
 
 These are the issues mentioned in the thread that are still open and
 blockers:
 - Add nested serializers to config snapshots of composite
>> serializers:
 https://github.com/apache/flink/pull/3937 has no review yet
 - FLINK-6610 
>> WebServer
 could not be created,when set the "jobmanager.web.submit.enable" to
>> false
 - FLINK-6629 
 ClusterClient
 cannot submit jobs to HA cluster if address not set in configuration
 
 
 
 On Fri, May 19, 2017 at 12:17 AM, Till Rohrmann <
>> trohrm...@apache.org>
 wrote:
 
 I might have found another blocker:
> https://issues.apache.org/jira/browse/FLINK-6629.
> 
> The issue is that the ClusterClient only allows to submit jobs to
>> an HA
> cluster if you have specified the JobManager's address in the
> flink-conf.yaml or via the command line options. If no address is
>> set,
> then
> it fails completely. If the wrong address is set, which can easily
>> happen
> in an HA setting, then we are not able to find the proper connecting
> address for the ActorSystem. This basically voids Flink's HA
> capabilities.
> 
> Cheers,
> Till
> 
> On Thu, May 18, 2017 at 10:23 PM, Chesnay Schepler <
>> ches...@apache.org>
> wrote:
> 
> The test document says that the default 

[jira] [Created] (FLINK-6691) Add checkstyle import block rule for scala imports

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6691:
---

 Summary: Add checkstyle import block rule for scala imports
 Key: FLINK-6691
 URL: https://issues.apache.org/jira/browse/FLINK-6691
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0


Similar to java and javax imports we should give scala imports a separate 
import block.





Re: [DISCUSS] Release 1.3.0 RC1 (Non voting, testing release candidate)

2017-05-23 Thread Chesnay Schepler
It appears that rescaling is broken, I've filed 
https://issues.apache.org/jira/browse/FLINK-6690 for that.


needless to say, this is a release blocker.

On 23.05.2017 20:55, Robert Metzger wrote:

I know I'm talking to myself here :) Anyways, I was running into some
issues while creating the release (I was using master instead of the
release-1.3 branch, which led to some issues with the scala 2.10 / 2.11
switch).

The RC2 is basically ready, however, there's at least one new blocker:
https://issues.apache.org/jira/browse/FLINK-6685 which needs addressing
first.

Let me know if you want me to publish the new RC2. Otherwise, I'll re-do it
with the fix included.

On Tue, May 23, 2017 at 10:35 AM, Robert Metzger 
wrote:


I've started building the RC.

On Mon, May 22, 2017 at 6:01 PM, Robert Metzger 
wrote:


Gordon's PR has been merged. I forgot one blocking issue. Till created a
PR for it: https://issues.apache.org/jira/browse/FLINK-6328
Once travis has passed, I'll merge that one and then do the RC.

On Mon, May 22, 2017 at 10:36 AM, Robert Metzger 
wrote:


Thanks a lot for doing the legal checks for the release.

I'll create the first voting release candidate once
https://github.com/apache/flink/pull/3937 is merged.

On Fri, May 19, 2017 at 4:45 PM, Xiaowei Jiang 
wrote:


Hi Robert,

I did the following checks and found no issues:

   - Check if checksums and GPG files match the corresponding release
files
   - Verify that the source archives do not contain any binaries
   - Check if the source release is building properly with Maven
(including
license header check and checkstyle). Also the tests should be executed
(mvn clean verify).
   - Check build for custom Hadoop version (2.3.0, 2.4.1, 2.6.3, 2.7.2)
   - Check build for Scala 2.11
   - Check that the README.md file is meaningful

thanks
Xiaowei

On Fri, May 19, 2017 at 6:29 PM, Chesnay Schepler 
wrote:


Whoops, this is the PR for enabling the test:
https://github.com/apache/flink/pull/3844


On 19.05.2017 12:14, Robert Metzger wrote:


Thank you for all your input.

@Chesnay, in your email you are pointing to the same PR twice:
This PR fixes the compilation on Windows:  (reviewed once, most

recent

changes not reviewed)
https://github.com/apache/flink/pull/3854
This PR enables a test for savepoint compatibility: (nice to have,

easy to

review)
https://github.com/apache/flink/pull/3854

Also the "should define more than one task slot" thing is not

important

IMO.

I think the "empty path on windows" thing is not a release blocker.

--

These are the issues mentioned in the thread that are still open and
blockers:
- Add nested serializers to config snapshots of composite

serializers:

https://github.com/apache/flink/pull/3937 has no review yet
- FLINK-6610 

WebServer

could not be created,when set the "jobmanager.web.submit.enable" to

false

- FLINK-6629 
ClusterClient
cannot submit jobs to HA cluster if address not set in configuration



On Fri, May 19, 2017 at 12:17 AM, Till Rohrmann <

trohrm...@apache.org>

wrote:

I might have found another blocker:

https://issues.apache.org/jira/browse/FLINK-6629.

The issue is that the ClusterClient only allows to submit jobs to

an HA

cluster if you have specified the JobManager's address in the
flink-conf.yaml or via the command line options. If no address is

set,

then
it fails completely. If the wrong address is set, which can easily

happen

in an HA setting, then we are not able to find the proper connecting
address for the ActorSystem. This basically voids Flink's HA
capabilities.

Cheers,
Till

On Thu, May 18, 2017 at 10:23 PM, Chesnay Schepler <

ches...@apache.org>

wrote:

The test document says that the default flink-conf.yml "should

define

more


than one task slot", but it currently configures exactly 1 task

slot.

Not
sure if it is a typo in the doc though.


On 18.05.2017 22:10, Chesnay Schepler wrote:

The start-cluster.sh script failed for me on Windows when executed

in a

directory containing spaces.

On 18.05.2017 20:47, Chesnay Schepler wrote:

FLINK-6610 should also be fixed; it is currently not possible to
disable

web-submissions.

On 18.05.2017 18:13, jincheng sun wrote:

Hi Robert,

I have some checks to do and some test improve PRs (
https://issues.apache.org/jira/browse/FLINK-6619) need be done

soon.

Best,
SunJincheng

2017-05-18 22:17 GMT+08:00 Greg Hogan :

The following tickets for 1.3.0 have a PR in need of review:


[FLINK-6582] [docs] Project from maven archetype is not

buildable by

default
[FLINK-6616] [docs] Clarify provenance of official Docker

images


On May 18, 2017, at 5:40 AM, Fabian Hueske 


wrote:

I have a couple of PRs ready with bugfixes that 

[jira] [Created] (FLINK-6690) Rescaling broken

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6690:
---

 Summary: Rescaling broken
 Key: FLINK-6690
 URL: https://issues.apache.org/jira/browse/FLINK-6690
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.3.0, 1.4.0
Reporter: Chesnay Schepler
Assignee: Stefan Richter
Priority: Blocker


Rescaling appears to be broken for both 1.3 and 1.4. When I tried it out, I got 
the following exception:

{code}
java.lang.IllegalStateException: Could not initialize keyed state backend.
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initKeyedState(AbstractStreamOperator.java:321)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:217)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:675)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:662)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:251)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
    at java.util.ArrayList.rangeCheck(ArrayList.java:635)
    at java.util.ArrayList.get(ArrayList.java:411)
    at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullRestoreOperation.restoreKVStateData(RocksDBKeyedStateBackend.java:1183)
    at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullRestoreOperation.restoreKeyGroupsInStateHandle(RocksDBKeyedStateBackend.java:1089)
    at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullRestoreOperation.doRestore(RocksDBKeyedStateBackend.java:1070)
    at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.restore(RocksDBKeyedStateBackend.java:957)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.createKeyedStateBackend(StreamTask.java:771)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initKeyedState(AbstractStreamOperator.java:311)
    ... 6 more
{code}





Re: [DISCUSS] Release 1.3.0 RC1 (Non voting, testing release candidate)

2017-05-23 Thread Robert Metzger
I know I'm talking to myself here :) Anyways, I was running into some
issues while creating the release (I was using master instead of the
release-1.3 branch, which led to some issues with the scala 2.10 / 2.11
switch).

The RC2 is basically ready, however, there's at least one new blocker:
https://issues.apache.org/jira/browse/FLINK-6685 which needs addressing
first.

Let me know if you want me to publish the new RC2. Otherwise, I'll re-do it
with the fix included.

On Tue, May 23, 2017 at 10:35 AM, Robert Metzger 
wrote:

> I've started building the RC.
>
> On Mon, May 22, 2017 at 6:01 PM, Robert Metzger 
> wrote:
>
>> Gordon's PR has been merged. I forgot one blocking issue. Till created a
>> PR for it: https://issues.apache.org/jira/browse/FLINK-6328
>> Once travis has passed, I'll merge that one and then do the RC.
>>
>> On Mon, May 22, 2017 at 10:36 AM, Robert Metzger 
>> wrote:
>>
>>> Thanks a lot for doing the legal checks for the release.
>>>
>>> I'll create the first voting release candidate once
>>> https://github.com/apache/flink/pull/3937 is merged.
>>>
>>> On Fri, May 19, 2017 at 4:45 PM, Xiaowei Jiang 
>>> wrote:
>>>
 Hi Robert,

 I did the following checks and found no issues:

   - Check if checksums and GPG files match the corresponding release
 files
   - Verify that the source archives do not contain any binaries
   - Check if the source release is building properly with Maven
 (including
 license header check and checkstyle). Also the tests should be executed
 (mvn clean verify).
   - Check build for custom Hadoop version (2.3.0, 2.4.1, 2.6.3, 2.7.2)
   - Check build for Scala 2.11
   - Check that the README.md file is meaningful

 thanks
 Xiaowei

 On Fri, May 19, 2017 at 6:29 PM, Chesnay Schepler 
 wrote:

 > Whoops, this is the PR for enabling the test:
 > https://github.com/apache/flink/pull/3844
 >
 >
 > On 19.05.2017 12:14, Robert Metzger wrote:
 >
 >> Thank you for all your input.
 >>
 >> @Chesnay, in your email you are pointing to the same PR twice:
 >> This PR fixes the compilation on Windows:  (reviewed once, most
 recent
 >> changes not reviewed)
 >> https://github.com/apache/flink/pull/3854
 >> This PR enables a test for savepoint compatibility: (nice to have,
 easy to
 >> review)
 >> https://github.com/apache/flink/pull/3854
 >>
 >> Also the "should define more than one task slot" thing is not
 important
 >> IMO.
 >>
 >> I think the "empty path on windows" thing is not a release blocker.
 >>
 >> --
 >>
 >> These are the issues mentioned in the thread that are still open and
 >> blockers:
 >> - Add nested serializers to config snapshots of composite
 serializers:
 >> https://github.com/apache/flink/pull/3937 has no review yet
 >> - FLINK-6610 
 WebServer
 >> could not be created,when set the "jobmanager.web.submit.enable" to
 false
 >> - FLINK-6629 
 >> ClusterClient
 >> cannot submit jobs to HA cluster if address not set in configuration
 >>
 >>
 >>
 >> On Fri, May 19, 2017 at 12:17 AM, Till Rohrmann <
 trohrm...@apache.org>
 >> wrote:
 >>
 >> I might have found another blocker:
 >>> https://issues.apache.org/jira/browse/FLINK-6629.
 >>>
 >>> The issue is that the ClusterClient only allows to submit jobs to
 an HA
 >>> cluster if you have specified the JobManager's address in the
 >>> flink-conf.yaml or via the command line options. If no address is
 set,
 >>> then
 >>> it fails completely. If the wrong address is set, which can easily
 happen
 >>> in an HA setting, then we are not able to find the proper connecting
 >>> address for the ActorSystem. This basically voids Flink's HA
 >>> capabilities.
 >>>
 >>> Cheers,
 >>> Till
 >>>
 >>> On Thu, May 18, 2017 at 10:23 PM, Chesnay Schepler <
 ches...@apache.org>
 >>> wrote:
 >>>
 >>> The test document says that the default flink-conf.yml "should
 define
 
 >>> more
 >>>
  than one task slot", but it currently configures exactly 1 task
 slot.
  Not
  sure if it is a typo in the doc though.
 
 
  On 18.05.2017 22:10, Chesnay Schepler wrote:
 
  The start-cluster.sh script failed for me on Windows when executed
 in a
 > directory containing spaces.
 >
 > On 18.05.2017 20:47, Chesnay Schepler wrote:
 >
 > FLINK-6610 should also be fixed; it is currently not possible to
 >>
 > 

[jira] [Created] (FLINK-6689) Remote StreamExecutionEnvironment fails to submit jobs against LocalFlinkMiniCluster

2017-05-23 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-6689:
--

 Summary: Remote StreamExecutionEnvironment fails to submit jobs 
against LocalFlinkMiniCluster
 Key: FLINK-6689
 URL: https://issues.apache.org/jira/browse/FLINK-6689
 Project: Flink
  Issue Type: Bug
  Components: Job-Submission
Affects Versions: 1.3.0
Reporter: Nico Kruber
 Fix For: 1.3.0


The following Flink program fails to execute with the current 1.3 branch (1.2 
works):

{code:java}
final String jobManagerAddress = "localhost";
final int jobManagerPort = ConfigConstants.DEFAULT_JOB_MANAGER_IPC_PORT;

final Configuration config = new Configuration();
config.setString(ConfigConstants.JOB_MANAGER_IPC_ADDRESS_KEY, jobManagerAddress);
config.setInteger(ConfigConstants.JOB_MANAGER_IPC_PORT_KEY, jobManagerPort);
config.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, true);

final LocalFlinkMiniCluster cluster = new LocalFlinkMiniCluster(config, false);
cluster.start(true);

final StreamExecutionEnvironment env =
    StreamExecutionEnvironment.createRemoteEnvironment(jobManagerAddress, jobManagerPort);

env.fromElements(1L).addSink(new DiscardingSink<Long>());

// fails due to leader session id being wrong:
env.execute("test");
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6688) Activate strict checkstyle in flink-test-utils

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6688:
---

 Summary: Activate strict checkstyle in flink-test-utils
 Key: FLINK-6688
 URL: https://issues.apache.org/jira/browse/FLINK-6688
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0








[jira] [Created] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6687:
---

 Summary: Activate strict checkstyle for flink-runtime-web
 Key: FLINK-6687
 URL: https://issues.apache.org/jira/browse/FLINK-6687
 Project: Flink
  Issue Type: Improvement
  Components: Webfrontend
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0








[jira] [Created] (FLINK-6686) Improve UDXF(UDF,UDTF,UDAF) test case

2017-05-23 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6686:
--

 Summary: Improve UDXF(UDF,UDTF,UDAF) test case
 Key: FLINK-6686
 URL: https://issues.apache.org/jira/browse/FLINK-6686
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.3.0
Reporter: sunjincheng
Assignee: sunjincheng


1. Check that UDF, UDTF, and UDAF work properly in group-windows and 
over-windows.
2. Check that all built-in aggregates on Batch and Stream work properly.
3. Let types such as Timestamp, BigDecimal, or POJOs flow through UDF, UDTF, 
and UDAF (input and output types).






Re: [DISCUSS] Reorganize Table API / SQL documentation

2017-05-23 Thread Fabian Hueske
Hi everybody,

I pushed the branch to the ASF Flink repository as a feature branch to keep
all PRs in one place:

-->  https://github.com/apache/flink/tree/tableDocs

Thanks,
Fabian

2017-05-23 16:25 GMT+01:00 Fabian Hueske :

> Hi everybody,
>
> I prepared a branch that creates the proposed structure and copied the
> existing documentation into the corresponding pages / sections.
> There are plenty of gaps that need to be filled or reworked.
>
> --> https://github.com/fhueske/flink/tree/tableDocs
>
> How do we go on from here?
> I think the easiest would be if everybody who's interested in working on
> the documentation picks a page and prepares a PR against my branch (we
> could also push this into a feature branch in the Flink repository if
> somebody prefers that). The PRs are cross-checked and we merge everything
> into master when the docs are ready.
>
> Any opinions or other proposals?
>
> Cheers, Fabian
>
> 2017-05-23 10:31 GMT+01:00 Fabian Hueske :
>
>> Hi everybody,
>>
>> Thanks for the feedback. I'll go ahead and create the proposed structure
>> and move the content of the existing docs with comments of what needs to be
>> adapted.
>> I'll put this into branch of my Github repo and let you know when I'm
>> done.
>> From there, we can distribute working on the missing parts / parts that
>> need adaption.
>>
>> Cheers, Fabian
>>
>> 2017-05-19 9:44 GMT+01:00 jincheng sun :
>>
>>> Hi, Fabian,
>>>
>>>   Thanks for the sketch. The structure looks good to me, and I'm glad to
>>> join the discussion in the Google doc.
>>>
>>> Cheers,
>>> SunJincheng
>>>
>>> 2017-05-19 14:55 GMT+08:00 Shaoxuan Wang :
>>>
>>> > Hello Fabian,
>>> > Thanks for drafting the proposal. I like the entire organization in
>>> general
>>> > and left a few comments. I think this will be a very good kick off to
>>> > reorganize the tableAPI doc.
>>> >
>>> > -shaoxuan
>>> >
>>> > On Fri, May 19, 2017 at 7:06 AM, Fabian Hueske 
>>> wrote:
>>> >
>>> > > Hi everybody,
>>> > >
>>> > > I came up with a proposal for the structure of the Table API / SQL
>>> > > documentation:
>>> > >
>>> > > https://docs.google.com/document/d/1ENY8tcPadZjoZ4AQ_
>>> > > lRRwWiVpScDkm_4rgxIGWGT5E0/edit?usp=sharing
>>> > >
>>> > > Feedback and comments are very welcome.
>>> > > Once we agree on a structure, we can create skeletons and distribute
>>> the
>>> > > work.
>>> > >
>>> > > Cheers,
>>> > > Fabian
>>> > >
>>> > > 2017-05-18 21:01 GMT+02:00 Haohui Mai :
>>> > >
>>> > > > +1
>>> > > >
>>> > > > The Table / SQL component has made significant progress in the
>>> last few
>>> > > > months (kudos to all contributors).
>>> > > >
>>> > > > It is a good time to have a documentation to reflect all the
>>> changes in
>>> > > the
>>> > > > Table / SQL side.
>>> > > >
>>> > > >
>>> > > >
>>> > > > On Thu, May 18, 2017 at 8:12 AM Robert Metzger <
>>> rmetz...@apache.org>
>>> > > > wrote:
>>> > > >
>>> > > > > Thank you Fabian for working on the proposal.
>>> > > > >
>>> > > > > On Thu, May 18, 2017 at 3:51 PM, Fabian Hueske <
>>> fhue...@gmail.com>
>>> > > > wrote:
>>> > > > >
>>> > > > > > Thanks for starting this discussion Robert.
>>> > > > > >
>>> > > > > > I think with the next release the Table API / SQL should be
>>> moved
>>> > up
>>> > > in
>>> > > > > the
>>> > > > > > Application Development menu.
>>> > > > > > I also thought about restructuring the docs, but it won't be
>>> trivial
>>> > > to
>>> > > > do
>>> > > > > > this, IMO because there are many orthogonal aspects:
>>> > > > > > - Stream/Batch
>>> > > > > > - Table/SQL
>>> > > > > > - Scala/Java
>>> > > > > >
>>> > > > > > and sometimes also common concepts.
>>> > > > > > At the moment there are also many new features missing like
>>> OVER
>>> > > > windows,
>>> > > > > > UDAGGs, retraction, StreamTableSinks, time indicator
>>> attributes,
>>> > > filter
>>> > > > > > pushdown, ...
>>> > > > > >
>>> > > > > > I will try to sketch a new structure in a Google Doc in the
>>> next
>>> > days
>>> > > > and
>>> > > > > > share it in this thread.
>>> > > > > >
>>> > > > > > Cheers, Fabian
>>> > > > > >
>>> > > > > > 2017-05-18 14:03 GMT+02:00 Kostas Kloudas <
>>> > > k.klou...@data-artisans.com
>>> > > > >:
>>> > > > > >
>>> > > > > > > A big +1 as well.
>>> > > > > > >
>>> > > > > > > > On May 18, 2017, at 1:55 PM, Ufuk Celebi 
>>> > wrote:
>>> > > > > > > >
>>> > > > > > > > On Thu, May 18, 2017 at 1:52 PM, Till Rohrmann <
>>> > > > trohrm...@apache.org
>>> > > > > >
>>> > > > > > > wrote:
>>> > > > > > > >> I think we have a history of creating too long monolithic
>>> > > > > > documentation
>>> > > > > > > >> pages which are hard to digest. So a big +1 for splitting
>>> the
>>> > > > Table
>>> > > > > > > API/SQL
>>> > > > > > > >> documentation up into more easily digestible pieces.
>>> > > > > > > >
>>> > > > > > > > +1
>>> > > > > > > >
>>> > > > > > > > 

Re: [DISCUSS] Reorganize Table API / SQL documentation

2017-05-23 Thread Fabian Hueske
Hi everybody,

I prepared a branch that creates the proposed structure and copied the
existing documentation into the corresponding pages / sections.
There are plenty of gaps that need to be filled or reworked.

--> https://github.com/fhueske/flink/tree/tableDocs

How do we go on from here?
I think the easiest would be if everybody who's interested in working on
the documentation picks a page and prepares a PR against my branch (we
could also push this into a feature branch in the Flink repository if
somebody prefers that). The PRs are cross-checked and we merge everything
into master when the docs are ready.

Any opinions or other proposals?

Cheers, Fabian

2017-05-23 10:31 GMT+01:00 Fabian Hueske :

> Hi everybody,
>
> Thanks for the feedback. I'll go ahead and create the proposed structure
> and move the content of the existing docs with comments of what needs to be
> adapted.
> I'll put this into branch of my Github repo and let you know when I'm done.
> From there, we can distribute working on the missing parts / parts that
> need adaption.
>
> Cheers, Fabian
>
> 2017-05-19 9:44 GMT+01:00 jincheng sun :
>
>> Hi, Fabian,
>>
>>   Thanks for the sketch. The structure looks good to me, and I'm glad to
>> join the discussion in the Google doc.
>>
>> Cheers,
>> SunJincheng
>>
>> 2017-05-19 14:55 GMT+08:00 Shaoxuan Wang :
>>
>> > Hello Fabian,
>> > Thanks for drafting the proposal. I like the entire organization in
>> general
>> > and left a few comments. I think this will be a very good kick off to
>> > reorganize the tableAPI doc.
>> >
>> > -shaoxuan
>> >
>> > On Fri, May 19, 2017 at 7:06 AM, Fabian Hueske 
>> wrote:
>> >
>> > > Hi everybody,
>> > >
>> > > I came up with a proposal for the structure of the Table API / SQL
>> > > documentation:
>> > >
>> > > https://docs.google.com/document/d/1ENY8tcPadZjoZ4AQ_
>> > > lRRwWiVpScDkm_4rgxIGWGT5E0/edit?usp=sharing
>> > >
>> > > Feedback and comments are very welcome.
>> > > Once we agree on a structure, we can create skeletons and distribute
>> the
>> > > work.
>> > >
>> > > Cheers,
>> > > Fabian
>> > >
>> > > 2017-05-18 21:01 GMT+02:00 Haohui Mai :
>> > >
>> > > > +1
>> > > >
>> > > > The Table / SQL component has made significant progress in the last
>> few
>> > > > months (kudos to all contributors).
>> > > >
>> > > > It is a good time to have a documentation to reflect all the
>> changes in
>> > > the
>> > > > Table / SQL side.
>> > > >
>> > > >
>> > > >
>> > > > On Thu, May 18, 2017 at 8:12 AM Robert Metzger > >
>> > > > wrote:
>> > > >
>> > > > > Thank you Fabian for working on the proposal.
>> > > > >
>> > > > > On Thu, May 18, 2017 at 3:51 PM, Fabian Hueske > >
>> > > > wrote:
>> > > > >
>> > > > > > Thanks for starting this discussion Robert.
>> > > > > >
>> > > > > > I think with the next release the Table API / SQL should be
>> moved
>> > up
>> > > in
>> > > > > the
>> > > > > > Application Development menu.
>> > > > > > I also thought about restructuring the docs, but it won't be
>> trivial
>> > > to
>> > > > do
>> > > > > > this, IMO because there are many orthogonal aspects:
>> > > > > > - Stream/Batch
>> > > > > > - Table/SQL
>> > > > > > - Scala/Java
>> > > > > >
>> > > > > > and sometimes also common concepts.
>> > > > > > At the moment there are also many new features missing like OVER
>> > > > windows,
>> > > > > > UDAGGs, retraction, StreamTableSinks, time indicator attributes,
>> > > filter
>> > > > > > pushdown, ...
>> > > > > >
>> > > > > > I will try to sketch a new structure in a Google Doc in the next
>> > days
>> > > > and
>> > > > > > share it in this thread.
>> > > > > >
>> > > > > > Cheers, Fabian
>> > > > > >
>> > > > > > 2017-05-18 14:03 GMT+02:00 Kostas Kloudas <
>> > > k.klou...@data-artisans.com
>> > > > >:
>> > > > > >
>> > > > > > > A big +1 as well.
>> > > > > > >
>> > > > > > > > On May 18, 2017, at 1:55 PM, Ufuk Celebi 
>> > wrote:
>> > > > > > > >
>> > > > > > > > On Thu, May 18, 2017 at 1:52 PM, Till Rohrmann <
>> > > > trohrm...@apache.org
>> > > > > >
>> > > > > > > wrote:
>> > > > > > > >> I think we have a history of creating too long monolithic
>> > > > > > documentation
>> > > > > > > >> pages which are hard to digest. So a big +1 for splitting
>> the
>> > > > Table
>> > > > > > > API/SQL
>> > > > > > > >> documentation up into more easily digestible pieces.
>> > > > > > > >
>> > > > > > > > +1
>> > > > > > > >
>> > > > > > > > Thanks for bringing it up
>> > > > > > >
>> > > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>
>


[jira] [Created] (FLINK-6685) SafetyNetCloseableRegistry is closed prematurely in Task::triggerCheckpointBarrier

2017-05-23 Thread Stefan Richter (JIRA)
Stefan Richter created FLINK-6685:
-

 Summary: SafetyNetCloseableRegistry is closed prematurely in 
Task::triggerCheckpointBarrier
 Key: FLINK-6685
 URL: https://issues.apache.org/jira/browse/FLINK-6685
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Stefan Richter
Assignee: Stefan Richter
Priority: Blocker


The {{SafetyNetCloseableRegistry}} is closed too early in 
{{triggerCheckpointBarrier(...)}}. Right now, the code seems to assume that 
{{statefulTask.triggerCheckpoint(...)}} is blocking, which it is not. As a 
result, the registry can be closed while the checkpoint is still running.
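The race can be illustrated with a minimal, self-contained sketch; all class and method names below are invented stand-ins for illustration, not the actual Flink code. Closing the registry right after a non-blocking trigger call lets the still-running checkpoint observe a closed registry:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

/** Sketch of the premature-close race; names are invented for illustration. */
public class PrematureCloseSketch {

    /** Minimal stand-in for a closeable registry. */
    static class CloseableRegistry implements Closeable {
        private final AtomicBoolean closed = new AtomicBoolean(false);

        void register(Closeable c) throws IOException {
            if (closed.get()) {
                throw new IOException("registry already closed");
            }
        }

        @Override
        public void close() {
            closed.set(true);
        }
    }

    /** Returns true if the "checkpoint" failed because its registry was closed under it. */
    static boolean runScenario() {
        CloseableRegistry registry = new CloseableRegistry();
        CountDownLatch registryClosed = new CountDownLatch(1);
        AtomicBoolean checkpointFailed = new AtomicBoolean(false);

        // Non-blocking trigger: the "checkpoint" runs in its own thread.
        Thread checkpoint = new Thread(() -> {
            try {
                registryClosed.await();       // the checkpoint is still running here...
                registry.register(() -> { }); // ...but the registry is already closed
            } catch (Exception e) {
                checkpointFailed.set(true);
            }
        });
        checkpoint.start();

        // The caller wrongly assumes the trigger call was blocking and
        // closes the registry right after the (async) checkpoint was started.
        registry.close();
        registryClosed.countDown();

        try {
            checkpoint.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        return checkpointFailed.get();
    }

    public static void main(String[] args) {
        System.out.println("checkpoint failed: " + runScenario());
    }
}
```

The latch only makes the interleaving deterministic for the demonstration; in the real code the same interleaving happens by timing.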



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6684) Remove AsyncCheckpointRunnable from StreamTask

2017-05-23 Thread Stefan Richter (JIRA)
Stefan Richter created FLINK-6684:
-

 Summary: Remove AsyncCheckpointRunnable from StreamTask
 Key: FLINK-6684
 URL: https://issues.apache.org/jira/browse/FLINK-6684
 Project: Flink
  Issue Type: Improvement
  Components: State Backends, Checkpointing
Reporter: Stefan Richter


Right now, {{StreamTask}} executes {{AsyncCheckpointRunnable}} to run the async 
part of a snapshot. However, the main reason for executing this code in a 
separate thread seems to be avoiding its execution under the checkpoint lock, 
so that processing can proceed.

Actually, the checkpoint is already triggered asynchronously, in 
{{Task::triggerCheckpointBarrier}}. We could also execute the checkpointing 
without {{AsyncCheckpointRunnable}}, by just running the code inside the thread 
that is spawned in {{Task::triggerCheckpointBarrier}}. We could simply

1) Run the synchronous part of the checkpoint under the checkpointing lock.
2) Run the asynchronous part of the checkpoint without holding the 
checkpointing lock.
3) Return a {{Future}} from {{StatefulTask::triggerCheckpoint}} when called 
from {{Task::triggerCheckpointBarrier}}.

This would simplify the code and make it possible to use the 
{{SafetyNetCloseableRegistry}} as intended. 
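The three steps could look roughly like the following sketch; this is a simplified illustration with invented names, not the actual {{StreamTask}} code:

```java
import java.util.concurrent.CompletableFuture;

/** Simplified sketch of the proposed checkpoint flow; names are invented. */
public class CheckpointFlowSketch {

    private final Object checkpointLock = new Object();

    /**
     * 1) synchronous part under the lock, 2) asynchronous part without it,
     * 3) a Future is returned to the caller.
     */
    CompletableFuture<String> triggerCheckpoint(long checkpointId) {
        final String syncSnapshot;
        synchronized (checkpointLock) {
            // synchronous part: capture a consistent view of the state
            syncSnapshot = "snapshot-" + checkpointId;
        }
        // asynchronous part: materialize/persist outside the lock so that
        // record processing can continue in the meantime
        return CompletableFuture.supplyAsync(() -> syncSnapshot + "-persisted");
    }

    public static void main(String[] args) {
        CheckpointFlowSketch task = new CheckpointFlowSketch();
        System.out.println(task.triggerCheckpoint(42).join());
    }
}
```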



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6683) building with Scala 2.11 no longer uses change-scala-version.sh

2017-05-23 Thread David Anderson (JIRA)
David Anderson created FLINK-6683:
-

 Summary: building with Scala 2.11 no longer uses 
change-scala-version.sh
 Key: FLINK-6683
 URL: https://issues.apache.org/jira/browse/FLINK-6683
 Project: Flink
  Issue Type: Sub-task
  Components: Build System, Documentation
Affects Versions: 1.3.0
Reporter: David Anderson
 Fix For: 1.3.0


FLINK-6414 eliminated change-scala-version.sh. The documentation 
(setup/building.html) needs to be updated to match.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6682) Improve error message in case parallelism exceeds maxParallelism

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6682:
---

 Summary: Improve error message in case parallelism exceeds 
maxParallelism
 Key: FLINK-6682
 URL: https://issues.apache.org/jira/browse/FLINK-6682
 Project: Flink
  Issue Type: Improvement
  Components: State Backends, Checkpointing
Affects Versions: 1.3.0, 1.4.0
Reporter: Chesnay Schepler


When restoring a job with a parallelism that exceeds the maxParallelism, we do 
not provide a useful error message; all you get is an 
IllegalArgumentException:

{code}
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution 
failed
at 
org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:343)
at 
org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:396)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:467)
... 22 more
Caused by: java.lang.IllegalArgumentException
at 
org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:123)
at 
org.apache.flink.runtime.checkpoint.StateAssignmentOperation.createKeyGroupPartitions(StateAssignmentOperation.java:449)
at 
org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignAttemptState(StateAssignmentOperation.java:117)
at 
org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignStates(StateAssignmentOperation.java:102)
at 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedState(CheckpointCoordinator.java:1038)
at 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1101)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:1386)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

{code}
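The improvement could be as simple as passing a descriptive message to the precondition check. A sketch, assuming a {{checkArgument}} helper analogous to Flink's {{Preconditions}} (the real call site and wording may differ):

```java
/** Sketch of a precondition check with a descriptive message; names are invented. */
public class ParallelismCheckSketch {

    /** Local stand-in for a Preconditions.checkArgument(boolean, Object) helper. */
    static void checkArgument(boolean condition, String message) {
        if (!condition) {
            throw new IllegalArgumentException(message);
        }
    }

    static void createKeyGroupPartitions(int parallelism, int maxParallelism) {
        checkArgument(parallelism <= maxParallelism,
                "The parallelism (" + parallelism + ") of the restored job must not "
                        + "exceed the maximum parallelism (" + maxParallelism + ").");
        // ... actual partitioning logic would follow here ...
    }

    public static void main(String[] args) {
        try {
            createKeyGroupPartitions(128, 64);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```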



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6680) App & Flink migration guide: updates for the 1.3 release

2017-05-23 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-6680:
--

 Summary: App & Flink migration guide: updates for the 1.3 release
 Key: FLINK-6680
 URL: https://issues.apache.org/jira/browse/FLINK-6680
 Project: Flink
  Issue Type: Sub-task
Reporter: Nico Kruber


The "Upgrading Applications and Flink Versions" guide at 
{{docs/ops/upgrading.md}} does not contain any info on Flink 1.3 yet.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6681) Update "Upgrading the Flink Framework Version" section for 1.2 -> 1.3

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6681:
---

 Summary: Update "Upgrading the Flink Framework Version" section 
for 1.2 -> 1.3
 Key: FLINK-6681
 URL: https://issues.apache.org/jira/browse/FLINK-6681
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: Chesnay Schepler






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6679) Document HeapStatebackend

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6679:
---

 Summary: Document HeapStatebackend
 Key: FLINK-6679
 URL: https://issues.apache.org/jira/browse/FLINK-6679
 Project: Flink
  Issue Type: Sub-task
  Components: State Backends, Checkpointing
Reporter: Chesnay Schepler






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6678) Migration guide: add note about removed log4j default logger from core artefacts

2017-05-23 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-6678:
--

 Summary: Migration guide: add note about removed log4j default 
logger from core artefacts
 Key: FLINK-6678
 URL: https://issues.apache.org/jira/browse/FLINK-6678
 Project: Flink
  Issue Type: Sub-task
  Components: Build System, Documentation
Affects Versions: 1.3.0
Reporter: Nico Kruber
 Fix For: 1.3.0


The migration guide at {{docs/dev/migration.md}} needs to be extended with some 
notes about the removed specific logger dependencies in the Flink core 
artefacts (FLINK-6415).

This is relevant for applications embedding Flink. The examples and quickstarts 
already add their own loggers, but other projects may need to add them as well, 
for example by adding the following dependencies to Maven's {{pom.xml}}:

{code:xml}

<dependency>
	<groupId>org.slf4j</groupId>
	<artifactId>slf4j-log4j12</artifactId>
	<version>1.7.7</version>
</dependency>

<dependency>
	<groupId>log4j</groupId>
	<artifactId>log4j</artifactId>
	<version>1.2.17</version>
</dependency>
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6677) Add Table API changes to the migration guide

2017-05-23 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-6677:
--

 Summary: Add Table API changes to the migration guide
 Key: FLINK-6677
 URL: https://issues.apache.org/jira/browse/FLINK-6677
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table API & SQL
Affects Versions: 1.3.0
Reporter: Nico Kruber
 Fix For: 1.3.0


The migration guide at {{docs/dev/migration.md}} needs to be extended with some 
notes about the API changes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6676) Add QueryableStateClient changes to the migration guide

2017-05-23 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-6676:
--

 Summary: Add QueryableStateClient changes to the migration guide
 Key: FLINK-6676
 URL: https://issues.apache.org/jira/browse/FLINK-6676
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Queryable State
Affects Versions: 1.3.0
Reporter: Nico Kruber
 Fix For: 1.3.0


The migration guide at {{docs/dev/migration.md}} needs to be extended with some 
notes about the API changes:
* changes in the constructor
* more?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6675) Activate strict checkstyle for flink-annotations

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6675:
---

 Summary: Activate strict checkstyle for flink-annotations
 Key: FLINK-6675
 URL: https://issues.apache.org/jira/browse/FLINK-6675
 Project: Flink
  Issue Type: Improvement
  Components: Core
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
Priority: Trivial
 Fix For: 1.4.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6674) Update release 1.3 docs

2017-05-23 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-6674:
--

 Summary: Update release 1.3 docs
 Key: FLINK-6674
 URL: https://issues.apache.org/jira/browse/FLINK-6674
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.3.0
Reporter: Nico Kruber
 Fix For: 1.3.0


Umbrella issue to track required updates to the documentation for the 1.3 
release.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6673) Yarn single-job submission doesn't fail early when too few slots are available

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6673:
---

 Summary: Yarn single-job submission doesn't fail early when too few 
slots are available
 Key: FLINK-6673
 URL: https://issues.apache.org/jira/browse/FLINK-6673
 Project: Flink
  Issue Type: Bug
  Components: Job-Submission, YARN
Affects Versions: 1.3.0
Reporter: Chesnay Schepler


When submitting a single job to yarn that requires more slots than are 
available (in total) the job ends up in a restart-loop, instead of being 
canceled right away.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6672) Support CAST(timestamp AS BIGINT)

2017-05-23 Thread Timo Walther (JIRA)
Timo Walther created FLINK-6672:
---

 Summary: Support CAST(timestamp AS BIGINT)
 Key: FLINK-6672
 URL: https://issues.apache.org/jira/browse/FLINK-6672
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Timo Walther


It is not possible in SQL to cast a TIMESTAMP to BIGINT, or a TIME or DATE to 
INT. The Table API and the code generation support this, but the SQL validation 
seems to prohibit it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] Reorganize Table API / SQL documentation

2017-05-23 Thread Fabian Hueske
Hi everybody,

Thanks for the feedback. I'll go ahead and create the proposed structure
and move the content of the existing docs with comments of what needs to be
adapted.
I'll put this into branch of my Github repo and let you know when I'm done.
From there, we can distribute working on the missing parts / parts that
need adaption.

Cheers, Fabian

2017-05-19 9:44 GMT+01:00 jincheng sun :

> Hi, Fabian,
>
>   Thanks for the sketch. The structure is pretty well to me, And glad to
> join in the discussion in google doc.
>
> Cheers,
> SunJincheng
>
> 2017-05-19 14:55 GMT+08:00 Shaoxuan Wang :
>
> > Hello Fabian,
> > Thanks for drafting the proposal. I like the entire organization in
> general
> > and left a few comments. I think this will be a very good kick off to
> > reorganize the tableAPI doc.
> >
> > -shaoxuan
> >
> > On Fri, May 19, 2017 at 7:06 AM, Fabian Hueske 
> wrote:
> >
> > > Hi everybody,
> > >
> > > I came up with a proposal for the structure of the Table API / SQL
> > > documentation:
> > >
> > > https://docs.google.com/document/d/1ENY8tcPadZjoZ4AQ_
> > > lRRwWiVpScDkm_4rgxIGWGT5E0/edit?usp=sharing
> > >
> > > Feedback and comments are very welcome.
> > > Once we agree on a structure, we can create skeletons and distribute
> the
> > > work.
> > >
> > > Cheers,
> > > Fabian
> > >
> > > 2017-05-18 21:01 GMT+02:00 Haohui Mai :
> > >
> > > > +1
> > > >
> > > > The Table / SQL component has made significant progress in the last
> few
> > > > months (kudos to all contributors).
> > > >
> > > > It is a good time to have a documentation to reflect all the changes
> in
> > > the
> > > > Table / SQL side.
> > > >
> > > >
> > > >
> > > > On Thu, May 18, 2017 at 8:12 AM Robert Metzger 
> > > > wrote:
> > > >
> > > > > Thank you Fabian for working on the proposal.
> > > > >
> > > > > On Thu, May 18, 2017 at 3:51 PM, Fabian Hueske 
> > > > wrote:
> > > > >
> > > > > > Thanks for starting this discussion Robert.
> > > > > >
> > > > > > I think with the next release the Table API / SQL should be moved
> > up
> > > in
> > > > > the
> > > > > > Application Development menu.
> > > > > > I also though about restructuring the docs, but it won't be
> trivial
> > > to
> > > > do
> > > > > > this, IMO because there are many orthogonal aspects:
> > > > > > - Stream/Batch
> > > > > > - Table/SQL
> > > > > > - Scala/Java
> > > > > >
> > > > > > and sometimes also common concepts.
> > > > > > At the moment there are also many new features missing like OVER
> > > > windows,
> > > > > > UDAGGs, retraction, StreamTableSinks, time indicator attributes,
> > > filter
> > > > > > pushdown, ...
> > > > > >
> > > > > > I will try to sketch a new structure in a Google Doc in the next
> > days
> > > > and
> > > > > > share it in this thread.
> > > > > >
> > > > > > Cheers, Fabian
> > > > > >
> > > > > > 2017-05-18 14:03 GMT+02:00 Kostas Kloudas <
> > > k.klou...@data-artisans.com
> > > > >:
> > > > > >
> > > > > > > A big +1 as well.
> > > > > > >
> > > > > > > > On May 18, 2017, at 1:55 PM, Ufuk Celebi 
> > wrote:
> > > > > > > >
> > > > > > > > On Thu, May 18, 2017 at 1:52 PM, Till Rohrmann <
> > > > trohrm...@apache.org
> > > > > >
> > > > > > > wrote:
> > > > > > > >> I think we have a history of creating too long monolithic
> > > > > > documentation
> > > > > > > >> pages which are hard to digest. So a big +1 for splitting
> the
> > > > Table
> > > > > > > API/SQL
> > > > > > > >> documentation up into more easily digestible pieces.
> > > > > > > >
> > > > > > > > +1
> > > > > > > >
> > > > > > > > Thanks for bringing it up
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


[jira] [Created] (FLINK-6671) RocksDBStateBackendTest.testCancelRunningSnapshot unstable

2017-05-23 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-6671:


 Summary: RocksDBStateBackendTest.testCancelRunningSnapshot unstable
 Key: FLINK-6671
 URL: https://issues.apache.org/jira/browse/FLINK-6671
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.3.0, 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
Priority: Critical
 Fix For: 1.3.0, 1.4.0


The {{RocksDBStateBackendTest.testCancelRunningSnapshot}} test is unstable [1]. 
The problem is that the test does not wait for the snapshotting thread to 
finish (join) before verifying the executed calls on the {{RocksObjects}}.

[1] https://s3.amazonaws.com/archive.travis-ci.org/jobs/235106659/log.txt 
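In essence, the fix is to join the snapshotting thread before asserting on its side effects. A minimal stand-alone illustration (not the actual test code):

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch: join the worker thread before asserting on its side effects. */
public class JoinBeforeVerifySketch {

    static int snapshotAndCount() {
        AtomicInteger releasedResources = new AtomicInteger(0);
        Thread snapshotThread = new Thread(() -> {
            // simulate the snapshot thread releasing resources on cancellation
            for (int i = 0; i < 3; i++) {
                releasedResources.incrementAndGet();
            }
        });
        snapshotThread.start();
        try {
            snapshotThread.join(); // wait for the thread to finish...
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return -1;
        }
        return releasedResources.get(); // ...before reading its effects
    }

    public static void main(String[] args) {
        // with the join, the count is deterministic; without it, the read races
        System.out.println(snapshotAndCount());
    }
}
```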



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] Release 1.3.0 RC1 (Non voting, testing release candidate)

2017-05-23 Thread Robert Metzger
I've started building the RC.

On Mon, May 22, 2017 at 6:01 PM, Robert Metzger  wrote:

> Gordon's PR has been merged. I forgot one blocking issue. Till created a
> PR for it: https://issues.apache.org/jira/browse/FLINK-6328
> Once travis has passed, I'll merge that one and then do the RC.
>
> On Mon, May 22, 2017 at 10:36 AM, Robert Metzger 
> wrote:
>
>> Thanks a lot for doing the legal checks for the release.
>>
>> I'll create the first voting release candidate once
>> https://github.com/apache/flink/pull/3937 is merged.
>>
>> On Fri, May 19, 2017 at 4:45 PM, Xiaowei Jiang 
>> wrote:
>>
>>> Hi Robert,
>>>
>>> I did the following checks and found no issues:
>>>
>>>   - Check if checksums and GPG files match the corresponding release
>>> files
>>>   - Verify that the source archives do not contain any binaries
>>>   - Check if the source release is building properly with Maven
>>> (including
>>> license header check and checkstyle). Also the tests should be executed
>>> (mvn clean verify).
>>>   - Check build for custom Hadoop version (2.3.0, 2.4.1, 2.6.3, 2.7.2)
>>>   - Check build for Scala 2.11
>>>   - Check that the README.md file is meaningful
>>>
>>> thanks
>>> Xiaowei
>>>
>>> On Fri, May 19, 2017 at 6:29 PM, Chesnay Schepler 
>>> wrote:
>>>
>>> > Whoops, this is the PR for enabling the test:
>>> > https://github.com/apache/flink/pull/3844
>>> >
>>> >
>>> > On 19.05.2017 12:14, Robert Metzger wrote:
>>> >
>>> >> Thank you for all your input.
>>> >>
>>> >> @Chesnay, in your email you are pointing to the same PR twice:
>>> >> This PR fixes the compilation on Windows:  (reviewed once, most recent
>>> >> changes not reviewed)
>>> >> https://github.com/apache/flink/pull/3854
>>> >> This PR enables a test for savepoint compatibility: (nice to have,
>>> easy to
>>> >> review)
>>> >> https://github.com/apache/flink/pull/3854
>>> >>
>>> >> Also the "should define more than one task slot" thing is not important
>>> >> IMO.
>>> >>
>>> >> I think the "empty path on windows" thing is not a release blocker.
>>> >>
>>> >> --
>>> >>
>>> >> These are the issues mentioned in the thread that are still open and
>>> >> blockers:
>>> >> - Add nested serializers to config snapshots of composite serializers:
>>> >> https://github.com/apache/flink/pull/3937 has no review yet
>>> >> - FLINK-6610 
>>> WebServer
>>> >> could not be created,when set the "jobmanager.web.submit.enable" to
>>> false
>>> >> - FLINK-6629 
>>> >> ClusterClient
>>> >> cannot submit jobs to HA cluster if address not set in configuration
>>> >>
>>> >>
>>> >>
>>> >> On Fri, May 19, 2017 at 12:17 AM, Till Rohrmann >> >
>>> >> wrote:
>>> >>
>>> >> I might have found another blocker:
>>> >>> https://issues.apache.org/jira/browse/FLINK-6629.
>>> >>>
>>> >>> The issue is that the ClusterClient only allows to submit jobs to an
>>> HA
>>> >>> cluster if you have specified the JobManager's address in the
>>> >>> flink-conf.yaml or via the command line options. If no address is
>>> set,
>>> >>> then
>>> >>> it fails completely. If the wrong address is set, which can easily
>>> happen
>>> >>> in an HA setting, then we are not able to find the proper connecting
>>> >>> address for the ActorSystem. This basically voids Flink's HA
>>> >>> capabilities.
>>> >>>
>>> >>> Cheers,
>>> >>> Till
>>> >>>
>>> >>> On Thu, May 18, 2017 at 10:23 PM, Chesnay Schepler <
>>> ches...@apache.org>
>>> >>> wrote:
>>> >>>
>>> The test document says that the default flink-conf.yml "should define more
>>> than one task slot", but it currently configures exactly 1 task slot.
>>> Not sure if it is a typo in the doc though.
>>> 
>>> 
>>>  On 18.05.2017 22:10, Chesnay Schepler wrote:
>>> 
>>>  The start-cluster.sh script failed for me on Windows when executed
>>> in a
>>> > directory containing spaces.
>>> >
>>> > On 18.05.2017 20:47, Chesnay Schepler wrote:
>>> >
>>> > FLINK-6610 should also be fixed; it is currently not possible to
>>> > disable web-submissions.
>>> >>
>>> >> On 18.05.2017 18:13, jincheng sun wrote:
>>> >>
>>> >> Hi Robert,
>>> >>> I have some checks to do and some test improve PRs (
>>> >>> https://issues.apache.org/jira/browse/FLINK-6619) need be done
>>> soon.
>>> >>>
>>> >>> Best,
>>> >>> SunJincheng
>>> >>>
>>> >>> 2017-05-18 22:17 GMT+08:00 Greg Hogan :
>>> >>>
>>> >>> The following tickets for 1.3.0 have a PR in need of review:
>>> >>>
>>>  [FLINK-6582] [docs] Project from maven archetype is not
>>> buildable by
>>>  default
>>>  [FLINK-6616] [docs] Clarify provenance of official 

[jira] [Created] (FLINK-6670) remove CommonTestUtils.createTempDirectory()

2017-05-23 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-6670:
--

 Summary: remove CommonTestUtils.createTempDirectory()
 Key: FLINK-6670
 URL: https://issues.apache.org/jira/browse/FLINK-6670
 Project: Flink
  Issue Type: Bug
  Components: Tests
Reporter: Nico Kruber
Priority: Minor


{{CommonTestUtils.createTempDirectory()}} encourages a dangerous design pattern 
with potential concurrency issues in unit tests. These could be avoided by 
using the following pattern instead:
{code:java}
@Rule
public TemporaryFolder tempFolder = new TemporaryFolder();
{code}

We should therefore remove {{CommonTestUtils.createTempDirectory()}}.
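For comparison, the lifecycle that the {{TemporaryFolder}} rule provides (a fresh directory per test, deleted afterwards) can be sketched without JUnit roughly as follows; this is a simplified illustration, not the rule's actual implementation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

/** Sketch of the TemporaryFolder lifecycle: fresh dir per test, deleted after. */
public class TempFolderSketch {

    /** Runs one "test" in its own temp dir; returns true if cleanup succeeded. */
    static boolean runOnce() {
        try {
            // a unique directory per test, so concurrent tests never share state
            Path tempFolder = Files.createTempDirectory("flink-test-");
            Files.write(tempFolder.resolve("data.txt"), "hello".getBytes());
            // recursive delete after the test, like TemporaryFolder's cleanup
            try (Stream<Path> paths = Files.walk(tempFolder)) {
                paths.sorted(Comparator.reverseOrder())
                     .forEach(p -> p.toFile().delete());
            }
            return !Files.exists(tempFolder);
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("cleaned up: " + runOnce());
    }
}
```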



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)