[jira] [Created] (FLINK-12827) Extract Optimizer from Stream/BatchTableEnvImpl

2019-06-13 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12827:


 Summary: Extract Optimizer from Stream/BatchTableEnvImpl
 Key: FLINK-12827
 URL: https://issues.apache.org/jira/browse/FLINK-12827
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Legacy Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.9.0


We could extract the optimization part from the Table environments into a separate 
class. The only method of this class would be 
{{optimize(RelNode, /*additional args*/): RelNode}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12825) Remove deprecation from BatchTableSource/Sink

2019-06-12 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12825:


 Summary: Remove deprecation from BatchTableSource/Sink
 Key: FLINK-12825
 URL: https://issues.apache.org/jira/browse/FLINK-12825
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.9.0
Reporter: Dawid Wysakowicz


In commits 4946cf4d6ab0f502f065825e157c7cabde604fe1 and 
851d27bb54596d1044a831160b09aad9f3b32316, {{@Deprecated}} annotations were added 
to {{BatchTableSink}} and {{BatchTableSource}}. This is incorrect. They are 
still valid and the only supported interfaces for the legacy planner. We cannot 
deprecate them until we deprecate the whole planner, and I am not aware of such 
plans for 1.9.

Also, the deprecation comment is misleading, as you cannot use the 
Output/Input format versions instead.





[jira] [Created] (FLINK-12799) Improve expression based TableSchema extraction from DataStream/DataSet

2019-06-11 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12799:


 Summary: Improve expression based TableSchema extraction from 
DataStream/DataSet
 Key: FLINK-12799
 URL: https://issues.apache.org/jira/browse/FLINK-12799
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.9.0


We should improve the extraction of {{TableSchema}} from 
{{DataStream/DataSet}}. Currently it is split into three stages:
# We extract types, ignoring time attributes, via {{FieldInfoUtils#getFieldInfo}}
# We extract rowtime and proctime positions via 
{{StreamTableEnvImpl#validateAndExtractTimeAttributes}}
# We adjust indices from #1 using information from #2

All of that could happen in a single pass. This would also deal with 
porting/removing a few methods from {{StreamTableEnvImpl}}.





[jira] [Created] (FLINK-12798) Port TableEnvironment to flink-api modules

2019-06-11 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12798:


 Summary: Port TableEnvironment to flink-api modules
 Key: FLINK-12798
 URL: https://issues.apache.org/jira/browse/FLINK-12798
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.9.0








[jira] [Created] (FLINK-12773) Unstable kafka e2e test

2019-06-06 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12773:


 Summary: Unstable kafka e2e test
 Key: FLINK-12773
 URL: https://issues.apache.org/jira/browse/FLINK-12773
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Tests
Affects Versions: 1.9.0
Reporter: Dawid Wysakowicz


'Kafka 0.10 end-to-end test' fails occasionally on Travis because of a corrupted 
downloaded Kafka archive.

https://api.travis-ci.org/v3/job/542507472/log.txt





[jira] [Created] (FLINK-12737) Loosen Table dependencies

2019-06-05 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12737:


 Summary: Loosen Table dependencies
 Key: FLINK-12737
 URL: https://issues.apache.org/jira/browse/FLINK-12737
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API, Table SQL / Legacy Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.9.0


The aim of this task is to remove dependencies on the actual implementation of 
{{TableEnvironment}}. This includes:

* drop unique attribute generation (it is sufficient to index 
aggregates within a single operation)
* make the transformation from {{Table}} on the caller side rather than on the 
callee (a.k.a. remove {{getRelNode}})
* add an {{insertInto}} method to {{TableEnvironment}}

Additionally, move {{TemporalTableFunctionImpl}} to the api-java module.





[jira] [Created] (FLINK-12690) Introduce Planner interface

2019-05-31 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12690:


 Summary: Introduce Planner interface
 Key: FLINK-12690
 URL: https://issues.apache.org/jira/browse/FLINK-12690
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.9.0


The planner interface is the bridge between the base API and the different 
planner modules.





[jira] [Created] (FLINK-12604) Register TableSource/Sink as CatalogTable

2019-05-23 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12604:


 Summary: Register TableSource/Sink as CatalogTable
 Key: FLINK-12604
 URL: https://issues.apache.org/jira/browse/FLINK-12604
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API, Table SQL / Legacy Planner
Affects Versions: 1.9.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.9.0








[jira] [Created] (FLINK-12601) Register DataStream/DataSet as DataStream/SetTableOperations in Catalog

2019-05-23 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12601:


 Summary: Register DataStream/DataSet as 
DataStream/SetTableOperations in Catalog
 Key: FLINK-12601
 URL: https://issues.apache.org/jira/browse/FLINK-12601
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API, Table SQL / Legacy Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-12595) KinesisDataFetcherTest.testOriginalExceptionIsPreservedWhenInterruptedDuringShutdown deadlocks

2019-05-22 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12595:


 Summary: 
KinesisDataFetcherTest.testOriginalExceptionIsPreservedWhenInterruptedDuringShutdown
 deadlocks
 Key: FLINK-12595
 URL: https://issues.apache.org/jira/browse/FLINK-12595
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis, Tests
Affects Versions: 1.9.0
Reporter: Dawid Wysakowicz


https://api.travis-ci.org/v3/job/535738122/log.txt







[jira] [Created] (FLINK-12431) Port utility methods for extracting fields information from TypeInformation

2019-05-07 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12431:


 Summary: Port utility methods for extracting fields information 
from TypeInformation
 Key: FLINK-12431
 URL: https://issues.apache.org/jira/browse/FLINK-12431
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


We need those methods in the api module in order to create a {{Table}} out of 
a {{DataSet}}/{{DataStream}}.





[jira] [Created] (FLINK-12411) Fix failing e2e test test_streaming_sql.sh

2019-05-06 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12411:


 Summary: Fix failing e2e test test_streaming_sql.sh
 Key: FLINK-12411
 URL: https://issues.apache.org/jira/browse/FLINK-12411
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Legacy Planner, Tests
Affects Versions: 1.9.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-12298) Make column functions accept custom

2019-04-23 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12298:


 Summary: Make column functions accept custom 
 Key: FLINK-12298
 URL: https://issues.apache.org/jira/browse/FLINK-12298
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz








[jira] [Created] (FLINK-12297) We should clean the closure for OutputTags

2019-04-22 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12297:


 Summary: We should clean the closure for OutputTags
 Key: FLINK-12297
 URL: https://issues.apache.org/jira/browse/FLINK-12297
 Project: Flink
  Issue Type: Bug
  Components: Library / CEP
Affects Versions: 1.8.0
Reporter: Dawid Wysakowicz
 Fix For: 1.9.0, 1.8.1


Right now we do not invoke the closure cleaner on output tags. Therefore code such as:

{code}
@Test
public void testFlatSelectSerialization() throws Exception {
	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
	DataStreamSource<Integer> elements = env.fromElements(1, 2, 3);
	OutputTag<Integer> outputTag = new OutputTag<Integer>("AAA") {};

	CEP.pattern(elements, Pattern.<Integer>begin("A")).flatSelect(
		outputTag,
		new PatternFlatTimeoutFunction<Integer, Integer>() {
			@Override
			public void timeout(
					Map<String, List<Integer>> pattern,
					long timeoutTimestamp,
					Collector<Integer> out) throws Exception {
			}
		},
		new PatternFlatSelectFunction<Integer, Integer>() {
			@Override
			public void flatSelect(Map<String, List<Integer>> pattern, Collector<Integer> out) throws Exception {
			}
		}
	);

	env.execute();
}
{code}

will fail with an exception: {{The implementation of the PatternFlatSelectAdapter is not 
serializable.}}





[jira] [Created] (FLINK-12249) Type equivalence check fails for Window Aggregates

2019-04-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12249:


 Summary: Type equivalence check fails for Window Aggregates
 Key: FLINK-12249
 URL: https://issues.apache.org/jira/browse/FLINK-12249
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Legacy Planner, Tests
Affects Versions: 1.9.0
Reporter: Dawid Wysakowicz


Creating the Aggregate node fails in the {{LogicalWindowAggregateRule}} and 
{{ExtendedAggregateExtractProjectRule}} rules if the only grouping expression is a 
window and we compute an aggregation on a non-nullable field.

The root cause is how return type inference strategies in Calcite 
work and how we handle window aggregates. Take 
{{org.apache.calcite.sql.type.ReturnTypes#AGG_SUM}} as an example: it adjusts 
type nullability based on {{groupCount}}.

However, we pass false information, as we strip the window aggregation from 
the {{groupSet}} (in {{LogicalWindowAggregateRule}}).
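The nullability flip can be distilled into a small sketch; this is an illustrative summary of the inference rule, not Calcite code, and the class and method names are invented:

```java
public class AggSumNullabilityDemo {
    // Distilled rule: with no grouping keys, SUM over an empty input yields NULL,
    // so the result type must be nullable; with at least one grouping key every
    // output row aggregates a non-empty group, so the argument's nullability wins.
    public static boolean sumResultNullable(boolean argNullable, int groupCount) {
        return argNullable || groupCount == 0;
    }

    public static void main(String[] args) {
        // Stripping the window from the groupSet makes groupCount look like 0,
        // flipping a NOT NULL result type to nullable and breaking the
        // type equivalence check.
        System.out.println(sumResultNullable(false, 0)); // nullable
        System.out.println(sumResultNullable(false, 1)); // stays NOT NULL
    }
}
```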

One can reproduce this problem also with a unit test like this:

{code}
@Test
def testTumbleFunction2() = {
  val innerQuery =
    """
      |SELECT
      | CASE a WHEN 1 THEN 1 ELSE 99 END AS correct,
      | rowtime
      |FROM MyTable
    """.stripMargin

  val sql =
    "SELECT " +
      "  SUM(correct) as cnt, " +
      "  TUMBLE_START(rowtime, INTERVAL '15' MINUTE) as wStart " +
      s"FROM ($innerQuery) " +
      "GROUP BY TUMBLE(rowtime, INTERVAL '15' MINUTE)"

  val expected = ""
  streamUtil.verifySql(sql, expected)
}
{code}

This causes e2e tests to fail: 
https://travis-ci.org/apache/flink/builds/521183361?utm_source=slack&utm_medium=notification





[jira] [Created] (FLINK-12165) Add resolution rule that checks all unresolved expressions are resolved

2019-04-11 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12165:


 Summary: Add resolution rule that checks all unresolved 
expressions are resolved
 Key: FLINK-12165
 URL: https://issues.apache.org/jira/browse/FLINK-12165
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-12108) Simplify splitting expressions into projections, aggregations & windowProperties

2019-04-03 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-12108:


 Summary: Simplify splitting expressions into projections, 
aggregations & windowProperties
 Key: FLINK-12108
 URL: https://issues.apache.org/jira/browse/FLINK-12108
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-11884) Port Table to flink-api-java

2019-03-12 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-11884:


 Summary: Port Table to flink-api-java
 Key: FLINK-11884
 URL: https://issues.apache.org/jira/browse/FLINK-11884
 Project: Flink
  Issue Type: New Feature
  Components: API / Table SQL
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


This includes:
* uncoupling {{LogicalNode}} from {{RelBuilder}}
* uncoupling {{CatalogNode}} from {{RelDataType}}
* migrating {{Table}} to Java





[jira] [Created] (FLINK-11823) TrySerializer#duplicate does not create a proper duplication

2019-03-05 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-11823:


 Summary: TrySerializer#duplicate does not create a proper 
duplication
 Key: FLINK-11823
 URL: https://issues.apache.org/jira/browse/FLINK-11823
 Project: Flink
  Issue Type: Bug
  Components: API / Type Serialization System
Affects Versions: 1.7.2
Reporter: Dawid Wysakowicz
 Fix For: 1.7.3, 1.8.0


In Flink 1.7.x, {{TrySerializer#duplicate}} does not duplicate {{elemSerializer}} and 
{{throwableSerializer}}; the latter is a {{KryoSerializer}} and therefore 
should always be duplicated.

This was fixed in 1.8/master with 186b8df4155a4c171d71f1c806290bd94374416c.





[jira] [Created] (FLINK-11778) Check and update delivery guarantees in Flink docs

2019-02-28 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-11778:


 Summary: Check and update delivery guarantees in Flink docs
 Key: FLINK-11778
 URL: https://issues.apache.org/jira/browse/FLINK-11778
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.7.2, 1.6.4, 1.8.0
Reporter: Dawid Wysakowicz


We should check and update the page: 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/connectors/guarantees.html

At least the section for the Kafka producer should be updated, since starting from 
Kafka 0.11 we provide exactly-once guarantees.





[jira] [Created] (FLINK-11191) Exception in code generation when ambiguous columns in MATCH_RECOGNIZE

2018-12-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-11191:


 Summary: Exception in code generation when ambiguous columns in 
MATCH_RECOGNIZE
 Key: FLINK-11191
 URL: https://issues.apache.org/jira/browse/FLINK-11191
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.7.0, 1.8.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


Query:
{code}
SELECT *
FROM Ticker
MATCH_RECOGNIZE (
  PARTITION BY symbol, price
  ORDER BY proctime
  MEASURES
A.symbol AS symbol,
A.price AS price
  PATTERN (A)
  DEFINE
A AS symbol = 'a'
) AS T
{code}

throws a cryptic exception from the code generation stack saying that the output arity 
is wrong. We should add early validation and throw a meaningful exception. 

I've also created a Calcite ticket to fix it on Calcite's side: [CALCITE-2747]





[jira] [Created] (FLINK-11170) Support DISTINCT aggregates in MATCH_RECOGNIZE

2018-12-16 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-11170:


 Summary: Support DISTINCT aggregates in MATCH_RECOGNIZE
 Key: FLINK-11170
 URL: https://issues.apache.org/jira/browse/FLINK-11170
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.8.0
Reporter: Dawid Wysakowicz








[jira] [Created] (FLINK-11061) Add travis profile that would run on each commit with scala 2.12

2018-12-04 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-11061:


 Summary: Add travis profile that would run on each commit with 
scala 2.12
 Key: FLINK-11061
 URL: https://issues.apache.org/jira/browse/FLINK-11061
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.7.0, 1.8.0
Reporter: Dawid Wysakowicz
 Fix For: 1.8.0, 1.7.1


In Flink 1.7.0 we introduced support for Scala 2.12; therefore, we should add a 
Travis profile that checks we do not break Scala 2.12 compatibility.





[jira] [Created] (FLINK-11017) Time interval for window aggregations in SQL is wrongly translated if specified with YEAR_MONTH resolution

2018-11-28 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-11017:


 Summary: Time interval for window aggregations in SQL is wrongly 
translated if specified with YEAR_MONTH resolution
 Key: FLINK-11017
 URL: https://issues.apache.org/jira/browse/FLINK-11017
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.6.2, 1.7.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.6.3, 1.8.0, 1.7.1


If a time interval is specified with {{YEAR TO MONTH}} resolution, e.g.:
{code}
SELECT * 
FROM Mytable
GROUP BY 
TUMBLE(rowtime, INTERVAL '1-2' YEAR TO MONTH)
{code}
it will be wrongly translated into a 14-millisecond window. We should allow 
only {{DAY TO SECOND}} resolution.
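The translation bug comes down to a unit mix-up: a {{YEAR TO MONTH}} interval is internally a month count, and that count ends up interpreted as milliseconds. A minimal illustration (class and method names are invented, not Flink code):

```java
public class IntervalMonthsDemo {
    // A YEAR TO MONTH interval is represented internally as a month count:
    // INTERVAL '1-2' YEAR TO MONTH -> 1 * 12 + 2 = 14.
    public static long yearToMonthIntervalValue(int years, int months) {
        return years * 12L + months;
    }

    public static void main(String[] args) {
        // Read as milliseconds, the 14-month interval becomes a 14 ms window.
        System.out.println(yearToMonthIntervalValue(1, 2)); // 14
    }
}
```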





[jira] [Created] (FLINK-10895) TypeSerializerSnapshotMigrationITCase.testSavepoint test failed on travis

2018-11-15 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10895:


 Summary: TypeSerializerSnapshotMigrationITCase.testSavepoint test 
failed on travis
 Key: FLINK-10895
 URL: https://issues.apache.org/jira/browse/FLINK-10895
 Project: Flink
  Issue Type: Bug
  Components: Tests, Type Serialization System
Reporter: Dawid Wysakowicz
 Fix For: 1.7.0


Failed with:
{code}
testSavepoint[Migrate Savepoint / Backend: 
(1.4,rocksdb)](org.apache.flink.test.migration.TypeSerializerSnapshotMigrationITCase)
  Time elapsed: 0.753 sec  <<< FAILURE!
java.lang.AssertionError: Values should be different. Actual: FAILED
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failEquals(Assert.java:185)
at org.junit.Assert.assertNotEquals(Assert.java:161)
at org.junit.Assert.assertNotEquals(Assert.java:175)
at 
org.apache.flink.test.checkpointing.utils.SavepointMigrationTestBase.restoreAndExecute(SavepointMigrationTestBase.java:217)
at 
org.apache.flink.test.migration.TypeSerializerSnapshotMigrationITCase.testSavepoint(TypeSerializerSnapshotMigrationITCase.java:136)
{code}





[jira] [Created] (FLINK-10894) Resuming Externalized Checkpoint (file, async, scale down) end-to-end test failed on travis

2018-11-15 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10894:


 Summary: Resuming Externalized Checkpoint (file, async, scale 
down) end-to-end test failed on travis
 Key: FLINK-10894
 URL: https://issues.apache.org/jira/browse/FLINK-10894
 Project: Flink
  Issue Type: Bug
  Components: E2E Tests
Reporter: Dawid Wysakowicz
 Fix For: 1.7.0


Failed with exception:
{code}
Caused by: java.io.FileNotFoundException: Cannot find meta data file 
'_metadata' in directory 
'file:/home/travis/build/apache/flink/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-56737836089/externalized-chckpt-e2e-backend-dir/48e55f131162ca5ab1fa95a93c76344b/chk-9'.
 Please try to load the checkpoint/savepoint directly from the metadata file 
instead of the directory.
at 
org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorage.resolveCheckpointPointer(AbstractFsCheckpointStorage.java:256)
at 
org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorage.resolveCheckpoint(AbstractFsCheckpointStorage.java:109)
at 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1100)
at 
org.apache.flink.runtime.jobmaster.JobMaster.tryRestoreExecutionGraphFromSavepoint(JobMaster.java:1234)
at 
org.apache.flink.runtime.jobmaster.JobMaster.createAndRestoreExecutionGraph(JobMaster.java:1158)
at 
org.apache.flink.runtime.jobmaster.JobMaster.(JobMaster.java:296)
at 
org.apache.flink.runtime.jobmaster.JobManagerRunner.(JobManagerRunner.java:157)
... 10 more
{code}

https://api.travis-ci.org/v3/job/454950372/log.txt





[jira] [Created] (FLINK-10893) Streaming File Sink s3 end-to-end test failed on travis

2018-11-15 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10893:


 Summary: Streaming File Sink s3 end-to-end test failed on travis
 Key: FLINK-10893
 URL: https://issues.apache.org/jira/browse/FLINK-10893
 Project: Flink
  Issue Type: Bug
  Components: E2E Tests
Reporter: Dawid Wysakowicz
 Fix For: 1.7.0


https://api.travis-ci.org/v3/job/454950368/log.txt

Test failed with exception:
{code}
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: 
The bucket is in this region: us-east-1. Please use this region to retry the 
request (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; 
Request ID: 7C554BA6EB1DDCAE; S3 Extended Request ID: 
hPd88Lpqbh1wPV83GN3YgGH3h3e9ct5pDvhoFrTI3JGipfrCoztr/bWvHCUE9l7D9fBTXJMkvus=), 
S3 Extended Request ID: 
hPd88Lpqbh1wPV83GN3YgGH3h3e9ct5pDvhoFrTI3JGipfrCoztr/bWvHCUE9l7D9fBTXJMkvus=
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1695)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1350)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1101)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:758)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:732)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:714)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:674)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:656)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:520)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4443)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4390)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4384)
at 
com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:844)
at 
com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:818)
at 
org.apache.flink.streaming.tests.util.s3.S3UtilProgram.listByFullPathPrefix(S3UtilProgram.java:107)
at 
org.apache.flink.streaming.tests.util.s3.S3UtilProgram.deleteByFullPathPrefix(S3UtilProgram.java:164)
at 
org.apache.flink.streaming.tests.util.s3.S3UtilProgram.main(S3UtilProgram.java:97)
{code}





[jira] [Created] (FLINK-10809) Using DataStreamUtils.reinterpretAsKeyedStream produces corrupted keyed state after restore

2018-11-07 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10809:


 Summary: Using DataStreamUtils.reinterpretAsKeyedStream produces 
corrupted keyed state after restore
 Key: FLINK-10809
 URL: https://issues.apache.org/jira/browse/FLINK-10809
 Project: Flink
  Issue Type: Bug
  Components: DataStream API, State Backends, Checkpointing
Affects Versions: 1.7.0
Reporter: Dawid Wysakowicz


I've tried using {{DataStreamUtils.reinterpretAsKeyedStream}} on the result of a 
windowed aggregation:
{code}
DataStream<Tuple2<Integer, List<Event>>> eventStream4 = eventStream2.keyBy(Event::getKey)
	.window(SlidingEventTimeWindows.of(Time.milliseconds(150 * 3), Time.milliseconds(150)))
	.apply(new WindowFunction<Event, Tuple2<Integer, List<Event>>, Integer, TimeWindow>() {
		private static final long serialVersionUID = 3166250579972849440L;

		@Override
		public void apply(
				Integer key, TimeWindow window,
				Iterable<Event> input,
				Collector<Tuple2<Integer, List<Event>>> out) throws Exception {
			out.collect(Tuple2.of(key,
				StreamSupport.stream(input.spliterator(), false).collect(Collectors.toList())));
		}
	});

DataStreamUtils.reinterpretAsKeyedStream(eventStream4, events -> events.f0)
	.flatMap(createSlidingWindowCheckMapper(pt))
	.addSink(new PrintSinkFunction<>());
{code}

and then in {{createSlidingWindowCheckMapper}} I verify that each event belongs 
to 3 consecutive windows, for which I keep the contents of the last window in 
{{ValueState}}. In a non-failure setup this check runs fine, but it misses a few 
windows at the beginning after a restore.

{code}
public class SlidingWindowCheckMapper extends RichFlatMapFunction<Tuple2<Integer, List<Event>>, String> {

	private static final long serialVersionUID = -744070793650644485L;

	/** This value state tracks previously seen events with the number of windows they appeared in. */
	private transient ValueState<List<Tuple2<Event, Integer>>> previousWindow;

	private final int slideFactor;

	SlidingWindowCheckMapper(int slideFactor) {
		this.slideFactor = slideFactor;
	}

	@Override
	public void open(Configuration parameters) throws Exception {
		ValueStateDescriptor<List<Tuple2<Event, Integer>>> previousWindowDescriptor =
			new ValueStateDescriptor<>("previousWindow",
				new ListTypeInfo<>(new TupleTypeInfo<>(TypeInformation.of(Event.class), BasicTypeInfo.INT_TYPE_INFO)));

		previousWindow = getRuntimeContext().getState(previousWindowDescriptor);
	}

	@Override
	public void flatMap(Tuple2<Integer, List<Event>> value, Collector<String> out) throws Exception {
		List<Tuple2<Event, Integer>> previousWindowValues =
			Optional.ofNullable(previousWindow.value()).orElseGet(Collections::emptyList);

		List<Event> newValues = value.f1;
		newValues.stream().reduce(new BinaryOperator<Event>() {
			@Override
			public Event apply(Event event, Event event2) {
				if (event2.getSequenceNumber() - 1 != event.getSequenceNumber()) {
					out.collect("Alert: events in window out of order!");
				}

				return event2;
			}
		});

		List<Tuple2<Event, Integer>> newWindow = new ArrayList<>();
		for (Tuple2<Event, Integer> windowValue : previousWindowValues) {
			if (!newValues.contains(windowValue.f0)) {
				out.collect(String.format("Alert: event %s did not belong to %d consecutive windows. Event seen so far %d times. Current window: %s",
					windowValue.f0,
					slideFactor,
					windowValue.f1,
					value.f1));
			} else {
				newValues.remove(windowValue.f0);
				if (windowValue.f1 + 1 != slideFactor) {
					newWindow.add(Tuple2.of(windowValue.f0, windowValue.f1 + 1));
				}
			}
		}

		newValues.forEach(e -> newWindow.add(Tuple2.of(e, 1)));

		previousWindow.update(newWindow);
	}
}
{code}





[jira] [Created] (FLINK-10752) Result of AbstractYarnClusterDescriptor#validateClusterResources is ignored

2018-11-01 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10752:


 Summary: Result of 
AbstractYarnClusterDescriptor#validateClusterResources is ignored
 Key: FLINK-10752
 URL: https://issues.apache.org/jira/browse/FLINK-10752
 Project: Flink
  Issue Type: Bug
  Components: YARN
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.7.0








[jira] [Created] (FLINK-10722) Document MATCH_RECOGNIZE

2018-10-30 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10722:


 Summary: Document MATCH_RECOGNIZE
 Key: FLINK-10722
 URL: https://issues.apache.org/jira/browse/FLINK-10722
 Project: Flink
  Issue Type: Sub-task
  Components: CEP, Documentation, Table API & SQL
Affects Versions: 1.7.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-10715) E2e tests fail with ConcurrentModificationException in MetricRegistryImpl

2018-10-29 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10715:


 Summary: E2e tests fail with ConcurrentModificationException in 
MetricRegistryImpl
 Key: FLINK-10715
 URL: https://issues.apache.org/jira/browse/FLINK-10715
 Project: Flink
  Issue Type: Bug
  Components: E2E Tests, Metrics
Affects Versions: 1.7.0
Reporter: Dawid Wysakowicz


A couple of e2e tests that rely on metrics fail with the following exception:

{code}
2018-10-29 11:40:32,781 WARN  
org.apache.flink.runtime.metrics.MetricRegistryImpl   - Error while 
reporting metrics
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)
at java.util.HashMap$EntryIterator.next(HashMap.java:1471)
at java.util.HashMap$EntryIterator.next(HashMap.java:1469)
at 
org.apache.flink.metrics.slf4j.Slf4jReporter.report(Slf4jReporter.java:101)
at 
org.apache.flink.runtime.metrics.MetricRegistryImpl$ReporterTask.run(MetricRegistryImpl.java:427)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}

Tests that failed:
* 'Resuming Externalized Checkpoint (file, sync, no parallelism change) 
end-to-end test'
* 'State TTL Heap backend end-to-end test'
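The underlying failure mode is the fail-fast iterator of {{HashMap}}: if the reporter iterates the metrics map while another thread registers or removes a metric, the next iterator step throws. A minimal single-threaded sketch of the same effect (illustrative, not Flink code):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {
    // Demonstrates HashMap's fail-fast iterator: structurally modifying the map
    // while an iteration is in progress makes the next iterator step throw
    // ConcurrentModificationException, which is what the reporter hits when
    // metrics are (un)registered concurrently with reporting.
    public static boolean triggersCme() {
        Map<String, Integer> metrics = new HashMap<>();
        for (int i = 0; i < 16; i++) {
            metrics.put("metric-" + i, i);
        }
        try {
            for (Map.Entry<String, Integer> entry : metrics.entrySet()) {
                // Simulates a concurrent registration during the report loop.
                metrics.put("late-" + entry.getKey(), 0);
            }
        } catch (ConcurrentModificationException expected) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(triggersCme()); // true
    }
}
```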





[jira] [Created] (FLINK-10678) Add a switch to run_test to configure if logs should be checked for errors/exceptions

2018-10-25 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10678:


 Summary: Add a switch to run_test to configure if logs should be 
checked for errors/exceptions
 Key: FLINK-10678
 URL: https://issues.apache.org/jira/browse/FLINK-10678
 Project: Flink
  Issue Type: Bug
  Components: E2E Tests
Affects Versions: 1.7.0
Reporter: Dawid Wysakowicz
 Fix For: 1.7.0








[jira] [Created] (FLINK-10675) Fix dependency issues in sql & table integration

2018-10-25 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10675:


 Summary: Fix dependency issues in sql & table integration
 Key: FLINK-10675
 URL: https://issues.apache.org/jira/browse/FLINK-10675
 Project: Flink
  Issue Type: Bug
  Components: CEP, SQL Client, Table API & SQL
Affects Versions: 1.7.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.7.0


There are two issues with dependencies:
* the check for the CEP dependency in {{DataStreamMatchRule}} should use the 
thread context classloader
* we should add CEP as a dependency of the sql-client





[jira] [Created] (FLINK-10669) Exceptions & errors are not properly checked in logs

2018-10-24 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10669:


 Summary: Exceptions & errors are not properly checked in logs
 Key: FLINK-10669
 URL: https://issues.apache.org/jira/browse/FLINK-10669
 Project: Flink
  Issue Type: Bug
  Components: E2E Tests
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.7.0








[jira] [Created] (FLINK-10597) Enable UDFs support in MATCH_RECOGNIZE

2018-10-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10597:


 Summary: Enable UDFs support in MATCH_RECOGNIZE
 Key: FLINK-10597
 URL: https://issues.apache.org/jira/browse/FLINK-10597
 Project: Flink
  Issue Type: Sub-task
  Components: CEP, Table API & SQL
Reporter: Dawid Wysakowicz








[jira] [Created] (FLINK-10596) Add access to timerService in IterativeCondition and Pattern(Flat)SelectFunction

2018-10-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10596:


 Summary: Add access to timerService in IterativeCondition and 
Pattern(Flat)SelectFunction
 Key: FLINK-10596
 URL: https://issues.apache.org/jira/browse/FLINK-10596
 Project: Flink
  Issue Type: Sub-task
  Components: CEP, Table API & SQL
Reporter: Dawid Wysakowicz








[jira] [Created] (FLINK-10595) Support patterns that can produce empty matches

2018-10-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10595:


 Summary: Support patterns that can produce empty matches
 Key: FLINK-10595
 URL: https://issues.apache.org/jira/browse/FLINK-10595
 Project: Flink
  Issue Type: Sub-task
  Components: CEP, Table API & SQL
Reporter: Dawid Wysakowicz


The CEP library does not emit empty matches. In order to be SQL standard 
compliant, we explicitly forbid patterns that can produce empty matches 
(patterns that have only optional states), e.g. A*.
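As an illustration, the check can be sketched on a simplified model in which a pattern is just a list of quantifiers; such a pattern can match empty input iff every state is optional. This is a hypothetical sketch, not the CEP library's API — the names {{Quantifier}} and {{canProduceEmptyMatch}} are invented for illustration.

```java
import java.util.List;

public class EmptyMatchCheck {

    // Simplified model of pattern quantifiers (hypothetical, not the CEP API).
    public enum Quantifier { ONE, OPTIONAL, ZERO_OR_MORE, ONE_OR_MORE }

    // A pattern can produce an empty match iff no state requires at least one row.
    public static boolean canProduceEmptyMatch(List<Quantifier> pattern) {
        for (Quantifier q : pattern) {
            if (q == Quantifier.ONE || q == Quantifier.ONE_OR_MORE) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // "A*" consists only of optional states -> would be rejected
        System.out.println(canProduceEmptyMatch(List.of(Quantifier.ZERO_OR_MORE)));
        // "A* B" requires at least the B row -> allowed
        System.out.println(canProduceEmptyMatch(
                List.of(Quantifier.ZERO_OR_MORE, Quantifier.ONE)));
    }
}
```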





[jira] [Created] (FLINK-10594) Support SUBSETS

2018-10-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10594:


 Summary: Support SUBSETS
 Key: FLINK-10594
 URL: https://issues.apache.org/jira/browse/FLINK-10594
 Project: Flink
  Issue Type: Sub-task
  Components: CEP, Table API & SQL
Reporter: Dawid Wysakowicz








[jira] [Created] (FLINK-10593) Add support for ALL ROWS PER MATCH

2018-10-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10593:


 Summary: Add support for ALL ROWS PER MATCH
 Key: FLINK-10593
 URL: https://issues.apache.org/jira/browse/FLINK-10593
 Project: Flink
  Issue Type: Sub-task
  Components: CEP, Table API & SQL
Reporter: Dawid Wysakowicz








[jira] [Created] (FLINK-10592) Support exclusions in patterns

2018-10-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10592:


 Summary: Support exclusions in patterns
 Key: FLINK-10592
 URL: https://issues.apache.org/jira/browse/FLINK-10592
 Project: Flink
  Issue Type: Sub-task
  Components: CEP, Table API & SQL
Reporter: Dawid Wysakowicz


We should support exclusions like {{(A {-B-} C)}} which means that rows will be 
matched to B, but will not participate in the output.





[jira] [Created] (FLINK-10591) Add functions to return TimeIndicators from MATCH_RECOGNIZE

2018-10-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10591:


 Summary: Add functions to return TimeIndicators from 
MATCH_RECOGNIZE
 Key: FLINK-10591
 URL: https://issues.apache.org/jira/browse/FLINK-10591
 Project: Flink
  Issue Type: Sub-task
  Components: CEP, Table API & SQL
Affects Versions: 1.7.0
Reporter: Dawid Wysakowicz


In order to be able to apply windowing on top of results from the 
MATCH_RECOGNIZE clause, we have to provide a way to return a proper 
TimeIndicator. We cannot simply return the rowtime/proctime of an arbitrary 
row of the match, because the match can be finalized e.g. by the first 
non-matching row (in the greedy case), so the TimeIndicator would be equal to 
the timestamp of that non-matching row.

The suggestion is to provide the functions:
* MATCH_ROWTIME()
* MATCH_PROCTIME()
that will output rowtime/proctime indicators.
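The intent can be illustrated with a minimal sketch (hypothetical names, not Flink's implementation): the indicator for a match should be derived from the rows actually assigned to the match, e.g. the maximum of their timestamps, rather than from the row that finalized it.

```java
public class MatchRowtime {

    // Rowtime of a match = max timestamp of the rows that belong to it
    // (illustrative sketch only; matchRowtime is not a Flink API).
    public static long matchRowtime(long[] matchedRowTimestamps) {
        long max = Long.MIN_VALUE;
        for (long t : matchedRowTimestamps) {
            max = Math.max(max, t);
        }
        return max;
    }

    public static void main(String[] args) {
        // A greedy match finalized by a non-matching row at t=100;
        // the matched rows have timestamps 10, 20, 30.
        long[] matched = {10L, 20L, 30L};
        long finalizingRowTime = 100L;
        System.out.println(matchRowtime(matched));  // 30, the correct indicator
        System.out.println(finalizingRowTime);      // 100, what naive forwarding would yield
    }
}
```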





[jira] [Created] (FLINK-10590) Optional quantifier is discarded from first element of group pattern

2018-10-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10590:


 Summary: Optional quantifier is discarded from first element of 
group pattern
 Key: FLINK-10590
 URL: https://issues.apache.org/jira/browse/FLINK-10590
 Project: Flink
  Issue Type: Bug
  Components: CEP, Table API & SQL
Reporter: Dawid Wysakowicz


In a pattern {{(A? B)+}}, the optional property of {{A}} state is effectively 
discarded in the compiled state machine.





[jira] [Created] (FLINK-10589) Group patterns should support greedy quantifiers

2018-10-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10589:


 Summary: Group patterns should support greedy quantifiers
 Key: FLINK-10589
 URL: https://issues.apache.org/jira/browse/FLINK-10589
 Project: Flink
  Issue Type: Sub-task
  Components: CEP, Table API & SQL
Reporter: Dawid Wysakowicz








[jira] [Created] (FLINK-10588) Support reusing same variable in Pattern

2018-10-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10588:


 Summary: Support reusing same variable in Pattern
 Key: FLINK-10588
 URL: https://issues.apache.org/jira/browse/FLINK-10588
 Project: Flink
  Issue Type: Sub-task
  Components: CEP, Table API & SQL
Affects Versions: 1.7.0
Reporter: Dawid Wysakowicz


One should be able to use the same variable multiple times in a pattern, so we 
should allow patterns like (A B A).

The CEP library enforces unique names for patterns. A simple workaround that 
generates artificial unique names won't work in this case, because having 
multiple patterns with the same name puts additional constraints on AFTER 
MATCH SKIP, which skips to the LAST/FIRST element among the elements assigned 
to both instances.





[jira] [Created] (FLINK-10587) Improve support of greedy/reluctant quantifiers

2018-10-18 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10587:


 Summary: Improve support of greedy/reluctant quantifiers
 Key: FLINK-10587
 URL: https://issues.apache.org/jira/browse/FLINK-10587
 Project: Flink
  Issue Type: Sub-task
  Components: CEP, Table API & SQL
Affects Versions: 1.7.0
Reporter: Dawid Wysakowicz


Right now there are limitations on where greedy/reluctant quantifiers can be 
applied:
* greedy cannot be applied to the last variable of a pattern
* an optional variable cannot be reluctant

We should improve support for those cases in the CEP library and enable them in SQL.





[jira] [Created] (FLINK-10470) Add method to check if pattern can produce empty matches

2018-10-01 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10470:


 Summary: Add method to check if pattern can produce empty matches
 Key: FLINK-10470
 URL: https://issues.apache.org/jira/browse/FLINK-10470
 Project: Flink
  Issue Type: Sub-task
  Components: CEP
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-10421) Shaded Hadoop S3A end-to-end test failed on Travis

2018-09-25 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10421:


 Summary: Shaded Hadoop S3A end-to-end test failed on Travis
 Key: FLINK-10421
 URL: https://issues.apache.org/jira/browse/FLINK-10421
 Project: Flink
  Issue Type: Bug
  Components: Tests
Reporter: Dawid Wysakowicz


https://api.travis-ci.org/v3/job/432916761/log.txt





[jira] [Created] (FLINK-10417) Add option to throw exception on pattern variable miss with SKIP_TO_FIRST/LAST

2018-09-25 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10417:


 Summary: Add option to throw exception on pattern variable miss 
with SKIP_TO_FIRST/LAST
 Key: FLINK-10417
 URL: https://issues.apache.org/jira/browse/FLINK-10417
 Project: Flink
  Issue Type: Improvement
  Components: CEP
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.7.0








[jira] [Created] (FLINK-10416) Add to rat excludes files generated by jepsen tests

2018-09-25 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10416:


 Summary: Add to rat excludes files generated by jepsen tests
 Key: FLINK-10416
 URL: https://issues.apache.org/jira/browse/FLINK-10416
 Project: Flink
  Issue Type: Bug
  Components: Build System, Tests
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.7.0


Currently Jepsen generates some files that result in rat plugin failures.





[jira] [Created] (FLINK-10414) Add skip to next strategy

2018-09-25 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10414:


 Summary: Add skip to next strategy
 Key: FLINK-10414
 URL: https://issues.apache.org/jira/browse/FLINK-10414
 Project: Flink
  Issue Type: Improvement
  Components: CEP
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.7.0


Add a skip-to-next strategy that discards all partial matches that started 
with the same element as the found match.
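A minimal sketch of the intended pruning, with lists of strings standing in for partial matches (illustrative only; {{prune}} is not a CEP API):

```java
import java.util.ArrayList;
import java.util.List;

public class SkipToNext {

    // Keep only the partial matches that did NOT start with the same
    // element as the found match.
    public static List<List<String>> prune(List<List<String>> partialMatches,
                                           List<String> foundMatch) {
        String start = foundMatch.get(0);
        List<List<String>> kept = new ArrayList<>();
        for (List<String> partial : partialMatches) {
            if (!partial.get(0).equals(start)) {
                kept.add(partial);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<List<String>> partials = List.of(
                List.of("a1", "b1"),   // starts with a1 -> discarded
                List.of("a2"));        // kept
        System.out.println(prune(partials, List.of("a1", "b1", "c1")));
        // [[a2]]
    }
}
```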





[jira] [Created] (FLINK-10409) Collection data sink does not propagate exceptions

2018-09-24 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10409:


 Summary: Collection data sink does not propagate exceptions
 Key: FLINK-10409
 URL: https://issues.apache.org/jira/browse/FLINK-10409
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Reporter: Dawid Wysakowicz


I would assume that this test should fail with {{RuntimeException}}, but it 
actually runs just fine.

{code}
@Test
public void testA() throws Exception {
	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

	List<String> resultList = new ArrayList<>();
	SingleOutputStreamOperator<String> result = env.fromElements("A").map(obj -> {
		throw new RuntimeException();
	});

	DataStreamUtils.collect(result).forEachRemaining(resultList::add);
}
{code}





[jira] [Created] (FLINK-10370) DistributedCache does not work in job cluster mode.

2018-09-19 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10370:


 Summary: DistributedCache does not work in job cluster mode.
 Key: FLINK-10370
 URL: https://issues.apache.org/jira/browse/FLINK-10370
 Project: Flink
  Issue Type: Bug
  Components: Job-Submission
Affects Versions: 1.6.0
Reporter: Dawid Wysakowicz


When using job cluster mode the client does not follow a standard submission 
path during which {{DistributedCacheEntries}} are written into 
{{Configuration}}. Therefore the files cannot be accessed in the job.

How to reproduce:
Simple job that uses {{DistributedCache}}:
{code}
public class DistributedCacheViaDfsTestProgram {

	public static void main(String[] args) throws Exception {

		final ParameterTool params = ParameterTool.fromArgs(args);

		final String inputFile = "hdfs://172.17.0.2:8020/home/hadoop-user/in";
		final String outputFile = "/tmp/out";

		final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setParallelism(1);

		env.registerCachedFile(inputFile, "test_data", false);

		env.fromElements(1)
			.map(new TestMapFunction())
			.writeAsText(outputFile, FileSystem.WriteMode.OVERWRITE);

		env.execute("Distributed Cache Via Blob Test Program");
	}

	static class TestMapFunction extends RichMapFunction<Integer, String> {

		@Override
		public String map(Integer value) throws Exception {
			final Path testFile = getRuntimeContext().getDistributedCache().getFile("test_data").toPath();

			return Files.readAllLines(testFile)
				.stream()
				.collect(Collectors.joining("\n"));
		}
	}
}
{code}

If one runs this program e.g. in yarn job cluster mode this will produce:
{code}
java.lang.IllegalArgumentException: File with name 'test_data' is not available. Did you forget to register the file?
	at org.apache.flink.api.common.cache.DistributedCache.getFile(DistributedCache.java:110)
	at org.apache.flink.streaming.tests.DistributedCacheViaDfsTestProgram$TestMapFunction.map(DistributedCacheViaDfsTestProgram.java:59)
	at org.apache.flink.streaming.tests.DistributedCacheViaDfsTestProgram$TestMapFunction.map(DistributedCacheViaDfsTestProgram.java:55)
	at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
	at org.apache.flink.streaming.api.functions.source.FromElementsFunction.run(FromElementsFunction.java:164)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:56)
	at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
	at java.lang.Thread.run(Thread.java:748)

This job will run fine, though, if it is submitted to a yarn-session cluster.





[jira] [Created] (FLINK-10354) Savepoints should be counted as retained checkpoints

2018-09-17 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10354:


 Summary: Savepoints should be counted as retained checkpoints
 Key: FLINK-10354
 URL: https://issues.apache.org/jira/browse/FLINK-10354
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.6.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.7.0


This task is about reverting [FLINK-6328].

The problem is that you can get incorrect results with exactly-once sinks if 
there is a failure after taking a savepoint but before taking the next 
checkpoint because the savepoint will also have manifested side effects to the 
sink.






[jira] [Created] (FLINK-10199) StandaloneSessionClusterEntrypoint does not respect host/ui-port given from jobmanager.sh

2018-08-22 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10199:


 Summary: StandaloneSessionClusterEntrypoint does not respect 
host/ui-port given from jobmanager.sh
 Key: FLINK-10199
 URL: https://issues.apache.org/jira/browse/FLINK-10199
 Project: Flink
  Issue Type: Bug
  Components: Startup Shell Scripts
Affects Versions: 1.6.0, 1.5.0
Reporter: Dawid Wysakowicz


The parameters {{--host}} and {{--webui-port}} provided to the jobmanager.sh 
script are ignored in standalonesession mode. You cannot start e.g. an HA 
setup via {{start-cluster.sh}}.





[jira] [Created] (FLINK-10166) Dependency problems when executing SQL query in sql-client

2018-08-17 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10166:


 Summary: Dependency problems when executing SQL query in sql-client
 Key: FLINK-10166
 URL: https://issues.apache.org/jira/browse/FLINK-10166
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.6.0
Reporter: Dawid Wysakowicz


When I tried to run the query:
{{select count(distinct name) from (Values ('a'), ('b')) AS NameTable(name)}}
in `sql-client.sh`, I got:
{code}
[ERROR] Could not execute SQL statement. Reason:
org.codehaus.commons.compiler.CompileException: Line 43, Column 10: Unknown 
variable or type "org.apache.commons.codec.binary.Base64"
{code}





[jira] [Created] (FLINK-10113) Drop support for pre 1.6 shared buffer state

2018-08-09 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-10113:


 Summary: Drop support for pre 1.6 shared buffer state
 Key: FLINK-10113
 URL: https://issues.apache.org/jira/browse/FLINK-10113
 Project: Flink
  Issue Type: Improvement
  Components: CEP
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.7.0


We could drop the migration code that transforms old pre-1.6 state to 1.6 state.
This still leaves the possibility to migrate from 1.5 to 1.7 via 1.6.





[jira] [Created] (FLINK-9871) Use Description class for ConfigOptions with rich formatting

2018-07-17 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9871:
---

 Summary: Use Description class for ConfigOptions with rich 
formatting
 Key: FLINK-9871
 URL: https://issues.apache.org/jira/browse/FLINK-9871
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.6.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.6.0








[jira] [Created] (FLINK-9866) Allow passing program arguments to StandaloneJobCluster

2018-07-16 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9866:
---

 Summary: Allow passing program arguments to StandaloneJobCluster
 Key: FLINK-9866
 URL: https://issues.apache.org/jira/browse/FLINK-9866
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.6.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


Right now an empty array is always passed as the program arguments to 
{{StandaloneJobClusterEntryPoint}}. We should pass the parsed arguments 
instead. We should also extend the run and docker scripts to allow passing 
arguments.





[jira] [Created] (FLINK-9815) YARNSessionCapacitySchedulerITCase flaky

2018-07-11 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9815:
---

 Summary: YARNSessionCapacitySchedulerITCase flaky
 Key: FLINK-9815
 URL: https://issues.apache.org/jira/browse/FLINK-9815
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.6.0
Reporter: Dawid Wysakowicz


The test fails because of dangling yarn applications.

Logs: https://api.travis-ci.org/v3/job/402657694/log.txt

It was also reported previously in [FLINK-8161]





[jira] [Created] (FLINK-9792) Cannot add html tags in options description

2018-07-10 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9792:
---

 Summary: Cannot add html tags in options description
 Key: FLINK-9792
 URL: https://issues.apache.org/jira/browse/FLINK-9792
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.5.1, 1.6.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


Right now it is impossible to add any HTML tags in option descriptions, because 
all "<" and ">" are escaped. Therefore some links there do not work.





[jira] [Created] (FLINK-9791) Outdated savepoint compatibility table

2018-07-10 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9791:
---

 Summary: Outdated savepoint compatibility table
 Key: FLINK-9791
 URL: https://issues.apache.org/jira/browse/FLINK-9791
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.4.2, 1.5.1, 1.6.0
Reporter: Dawid Wysakowicz


The savepoint compatibility table is outdated; it covers neither 1.4.x nor 
1.5.x. We should either update it or remove it, as I think we agreed to 
support only two versions of backward compatibility, which makes such a table 
unnecessary.

 

You can check the table in version 1.5.x here: 
https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/upgrading.html#compatibility-table





[jira] [Created] (FLINK-9773) Break down CEP documentation into subsections

2018-07-06 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9773:
---

 Summary: Break down CEP documentation into subsections
 Key: FLINK-9773
 URL: https://issues.apache.org/jira/browse/FLINK-9773
 Project: Flink
  Issue Type: Improvement
  Components: CEP, Documentation
Reporter: Dawid Wysakowicz








[jira] [Created] (FLINK-9760) Return a single element from extractPatterns

2018-07-05 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9760:
---

 Summary: Return a single element from extractPatterns
 Key: FLINK-9760
 URL: https://issues.apache.org/jira/browse/FLINK-9760
 Project: Flink
  Issue Type: Improvement
  Components: CEP
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


Right now {{SharedBuffer#extractPatterns}} allows extracting multiple matches 
for the same ComputationState, but our NFA does not allow creating such 
matches. Thus we should optimize this method, keeping in mind that it can only 
return one match.





[jira] [Created] (FLINK-9593) Unify AfterMatch semantics with SQL MATCH_RECOGNIZE

2018-06-15 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9593:
---

 Summary: Unify AfterMatch semantics with SQL MATCH_RECOGNIZE
 Key: FLINK-9593
 URL: https://issues.apache.org/jira/browse/FLINK-9593
 Project: Flink
  Issue Type: Improvement
  Components: CEP
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-9576) Wrong contiguity documentation

2018-06-13 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9576:
---

 Summary: Wrong contiguity documentation
 Key: FLINK-9576
 URL: https://issues.apache.org/jira/browse/FLINK-9576
 Project: Flink
  Issue Type: Bug
  Components: CEP, Documentation
Reporter: Dawid Wysakowicz


The example for contiguity is, first of all, wrong and, second of all, misleading:

 
{code:java}
To illustrate the above with an example, a pattern sequence "a+ b" (one or more 
"a"’s followed by a "b") with input "a1", "c", "a2", "b" will have the 
following results:
Strict Contiguity: {a2 b} – the "c" after "a1" causes "a1" to be discarded.
Relaxed Contiguity: {a1 b} and {a1 a2 b} – "c" is ignored.
Non-Deterministic Relaxed Contiguity: {a1 b}, {a2 b}, and {a1 a2 b}.
For looping patterns (e.g. oneOrMore() and times()) the default is relaxed 
contiguity. If you want strict contiguity, you have to explicitly specify it by 
using the consecutive() call, and if you want non-deterministic relaxed 
contiguity you can use the allowCombinations() call.
{code}
 

The results for relaxed contiguity are wrong, and they do not clearly explain 
the internal contiguity of the Kleene closure.





[jira] [Created] (FLINK-9538) Make KeyedStateFunction an interface

2018-06-06 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9538:
---

 Summary: Make KeyedStateFunction an interface
 Key: FLINK-9538
 URL: https://issues.apache.org/jira/browse/FLINK-9538
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz


I propose to change KeyedStateFunction from an abstract class to an interface 
(a FunctionalInterface in particular) to enable passing lambdas.
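A simplified sketch of the proposal (the {{State}} and {{KeyedStateFunction}} types below are hypothetical stand-ins, not Flink's actual signatures): once the type is a {{@FunctionalInterface}}, a call site can pass a lambda instead of an anonymous subclass.

```java
public class KeyedStateFunctionDemo {

    // Simplified stand-in for Flink's State type (hypothetical).
    public interface State {
        String value();
    }

    // The proposed shape: an interface instead of an abstract class,
    // so callers can pass lambdas.
    @FunctionalInterface
    public interface KeyedStateFunction<K, S extends State> {
        void process(K key, S state) throws Exception;
    }

    public static <K, S extends State> void applyToKey(K key, S state,
                                                       KeyedStateFunction<K, S> fn) throws Exception {
        fn.process(key, state);
    }

    public static void main(String[] args) throws Exception {
        State state = () -> "hello";
        // A lambda suffices; with an abstract class an anonymous subclass
        // would be required.
        applyToKey("k1", state, (k, s) -> System.out.println(k + "=" + s.value()));
    }
}
```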





[jira] [Created] (FLINK-9469) Add tests that cover PatternStream#flatSelect

2018-05-29 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9469:
---

 Summary: Add tests that cover PatternStream#flatSelect
 Key: FLINK-9469
 URL: https://issues.apache.org/jira/browse/FLINK-9469
 Project: Flink
  Issue Type: Improvement
  Components: CEP
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-9438) Add documentation for (Registry)AvroDeserializationSchema

2018-05-25 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9438:
---

 Summary: Add documentation for (Registry)AvroDeserializationSchema
 Key: FLINK-9438
 URL: https://issues.apache.org/jira/browse/FLINK-9438
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-9418) Migrate SharedBuffer to use MapState

2018-05-23 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9418:
---

 Summary: Migrate SharedBuffer to use MapState
 Key: FLINK-9418
 URL: https://issues.apache.org/jira/browse/FLINK-9418
 Project: Flink
  Issue Type: Improvement
  Components: CEP
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.6.0


Right now {{SharedBuffer}} is implemented with Java collections and the whole 
buffer is deserialized on each access. We should migrate it to MapState, so 
that only the necessary parts (e.g. tail entries) are deserialized.
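The motivation can be sketched with a plain Java map standing in for MapState (illustrative only; {{getEntry}} is not a Flink API): a map-backed buffer deserializes only the entries it touches, instead of the whole buffer on every access.

```java
import java.util.HashMap;
import java.util.Map;

public class LazyBufferDemo {

    // Simulates per-entry deserialization, as a MapState-backed buffer
    // would do (hypothetical sketch; byte arrays stand in for serialized state).
    public static String getEntry(Map<String, byte[]> backing, String key) {
        return new String(backing.get(key));
    }

    public static void main(String[] args) {
        Map<String, byte[]> buffer = new HashMap<>();
        buffer.put("e1", "a".getBytes());
        buffer.put("e2", "b".getBytes());
        buffer.put("e3", "c".getBytes());

        // Only the accessed tail entry is deserialized, not all three entries.
        System.out.println(getEntry(buffer, "e3"));
    }
}
```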





[jira] [Created] (FLINK-9337) Implement AvroDeserializationSchema

2018-05-11 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9337:
---

 Summary: Implement AvroDeserializationSchema
 Key: FLINK-9337
 URL: https://issues.apache.org/jira/browse/FLINK-9337
 Project: Flink
  Issue Type: New Feature
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-9338) Implement RegistryAvroDeserializationSchema

2018-05-11 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9338:
---

 Summary: Implement RegistryAvroDeserializationSchema
 Key: FLINK-9338
 URL: https://issues.apache.org/jira/browse/FLINK-9338
 Project: Flink
  Issue Type: New Feature
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-8620) Enable shipping custom artifacts to BlobStore and accessing them through DistributedCache

2018-02-09 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-8620:
---

 Summary: Enable shipping custom artifacts to BlobStore and 
accessing them through DistributedCache
 Key: FLINK-8620
 URL: https://issues.apache.org/jira/browse/FLINK-8620
 Project: Flink
  Issue Type: New Feature
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


We should be able to distribute custom files to TaskManagers. To do that, we 
can store those files in the BlobStore and later access them in TaskManagers 
through the DistributedCache.





[jira] [Created] (FLINK-7511) Remove dead code after dropping backward compatibility with <=1.2

2017-08-25 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7511:
---

 Summary: Remove dead code after dropping backward compatibility 
with <=1.2
 Key: FLINK-7511
 URL: https://issues.apache.org/jira/browse/FLINK-7511
 Project: Flink
  Issue Type: Improvement
  Components: CEP
Affects Versions: 1.4.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-7386) Flink Elasticsearch 5 connector is not compatible with Elasticsearch 5.2+ client

2017-08-08 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7386:
---

 Summary: Flink Elasticsearch 5 connector is not compatible with 
Elasticsearch 5.2+ client
 Key: FLINK-7386
 URL: https://issues.apache.org/jira/browse/FLINK-7386
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz


In the Elasticsearch 5.2.0 client, the class {{BulkProcessor}} was refactored 
and no longer has the method {{add(ActionRequest)}}.

For more info see: https://github.com/elastic/elasticsearch/pull/20109





[jira] [Created] (FLINK-7280) Wrong clearing SharedBuffer of Equal elements with same Timestamp

2017-07-27 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7280:
---

 Summary: Wrong clearing SharedBuffer of Equal elements with same 
Timestamp
 Key: FLINK-7280
 URL: https://issues.apache.org/jira/browse/FLINK-7280
 Project: Flink
  Issue Type: Bug
  Components: CEP
Affects Versions: 1.3.1, 1.4.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


The following tests fail right now:

{code}
@Test
public void testClearingBuffer() throws Exception {
	List<StreamRecord<Event>> inputEvents = new ArrayList<>();

	Event a1 = new Event(40, "a", 1.0);
	Event b1 = new Event(41, "b", 2.0);
	Event c1 = new Event(41, "c", 2.0);
	Event d = new Event(41, "d", 2.0);

	inputEvents.add(new StreamRecord<>(a1, 1));
	inputEvents.add(new StreamRecord<>(b1, 2));
	inputEvents.add(new StreamRecord<>(c1, 2));
	inputEvents.add(new StreamRecord<>(d, 2));

	Pattern<Event, ?> pattern = Pattern.<Event>begin("a").where(new SimpleCondition<Event>() {
		@Override
		public boolean filter(Event value) throws Exception {
			return value.getName().equals("a");
		}
	}).followedBy("b").where(new SimpleCondition<Event>() {
		@Override
		public boolean filter(Event value) throws Exception {
			return value.getName().equals("b");
		}
	}).followedBy("c").where(new SimpleCondition<Event>() {
		@Override
		public boolean filter(Event value) throws Exception {
			return value.getName().equals("c");
		}
	}).followedBy("d").where(new SimpleCondition<Event>() {
		@Override
		public boolean filter(Event value) throws Exception {
			return value.getName().equals("d");
		}
	});

	NFA<Event> nfa = NFACompiler.compile(pattern, Event.createTypeSerializer(), false);

	List<List<Event>> resultingPatterns = feedNFA(inputEvents, nfa);
	compareMaps(resultingPatterns, Lists.<List<Event>>newArrayList(
		Lists.newArrayList(a1, b1, c1, d)
	));
	assertTrue(nfa.isEmpty());
}
{code}

{code}
@Test
public void testClearingBufferWithUntilAtTheEnd() throws Exception {
List> inputEvents = new ArrayList<>();

Event a1 = new Event(40, "a", 1.0);
Event d1 = new Event(41, "d", 2.0);
Event d2 = new Event(41, "d", 2.0);
Event d3 = new Event(41, "d", 2.0);
Event d4 = new Event(41, "d", 2.0);

inputEvents.add(new StreamRecord<>(a1, 1));
inputEvents.add(new StreamRecord<>(d1, 2));
inputEvents.add(new StreamRecord<>(d2, 2));
inputEvents.add(new StreamRecord<>(d3, 2));
inputEvents.add(new StreamRecord<>(d4, 4));

Pattern pattern = Pattern.begin("a").where(new 
SimpleCondition() {
@Override
public boolean filter(Event value) throws Exception {
return value.getName().equals("a");
}
}).followedBy("d").where(new SimpleCondition() {
@Override
public boolean filter(Event value) throws Exception {
return value.getName().equals("d");
}
}).oneOrMore().until(new IterativeCondition() {
@Override
public boolean filter(Event value, Context ctx) throws 
Exception {
return 
Iterators.size(ctx.getEventsForPattern("d").iterator()) == 3;
}
});

NFA nfa = NFACompiler.compile(pattern, 
Event.createTypeSerializer(), false);

List> resultingPatterns = feedNFA(inputEvents, nfa);
compareMaps(resultingPatterns, Lists.>newArrayList(
Lists.newArrayList(a1, d1, d2, d3),
Lists.newArrayList(a1, d1, d2),
Lists.newArrayList(a1, d1)
));
assertTrue(nfa.isEmpty());
}
{code}





[jira] [Created] (FLINK-7202) Split suppressions for flink-core, flink-java, flink-optimizer per package

2017-07-15 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7202:
---

 Summary: Split suppressions for flink-core, flink-java, 
flink-optimizer per package
 Key: FLINK-7202
 URL: https://issues.apache.org/jira/browse/FLINK-7202
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-7192) Activate checkstyle flink-java/test/operator

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7192:
---

 Summary: Activate checkstyle flink-java/test/operator
 Key: FLINK-7192
 URL: https://issues.apache.org/jira/browse/FLINK-7192
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7191) Activate checkstyle flink-java/operators/translation

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7191:
---

 Summary: Activate checkstyle flink-java/operators/translation
 Key: FLINK-7191
 URL: https://issues.apache.org/jira/browse/FLINK-7191
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7190) Activate checkstyle flink-java/*

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7190:
---

 Summary: Activate checkstyle flink-java/*
 Key: FLINK-7190
 URL: https://issues.apache.org/jira/browse/FLINK-7190
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7189) Activate checkstyle flink-java/utils

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7189:
---

 Summary: Activate checkstyle flink-java/utils
 Key: FLINK-7189
 URL: https://issues.apache.org/jira/browse/FLINK-7189
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7187) Activate checkstyle flink-java/sca

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7187:
---

 Summary: Activate checkstyle flink-java/sca
 Key: FLINK-7187
 URL: https://issues.apache.org/jira/browse/FLINK-7187
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7188) Activate checkstyle flink-java/summarize

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7188:
---

 Summary: Activate checkstyle flink-java/summarize
 Key: FLINK-7188
 URL: https://issues.apache.org/jira/browse/FLINK-7188
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7186) Activate checkstyle flink-java/sampling

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7186:
---

 Summary: Activate checkstyle flink-java/sampling
 Key: FLINK-7186
 URL: https://issues.apache.org/jira/browse/FLINK-7186
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7185) Activate checkstyle flink-java/io

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7185:
---

 Summary: Activate checkstyle flink-java/io
 Key: FLINK-7185
 URL: https://issues.apache.org/jira/browse/FLINK-7185
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7183) Activate checkstyle flink-java/aggregation

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7183:
---

 Summary: Activate checkstyle flink-java/aggregation
 Key: FLINK-7183
 URL: https://issues.apache.org/jira/browse/FLINK-7183
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7184) Activate checkstyle flink-java/hadoop

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7184:
---

 Summary: Activate checkstyle flink-java/hadoop
 Key: FLINK-7184
 URL: https://issues.apache.org/jira/browse/FLINK-7184
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7182) Activate checkstyle flink-java/functions

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7182:
---

 Summary: Activate checkstyle flink-java/functions
 Key: FLINK-7182
 URL: https://issues.apache.org/jira/browse/FLINK-7182
 Project: Flink
  Issue Type: Improvement
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7181) Activate checkstyle flink-java/operators

2017-07-14 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7181:
---

 Summary: Activate checkstyle flink-java/operators
 Key: FLINK-7181
 URL: https://issues.apache.org/jira/browse/FLINK-7181
 Project: Flink
  Issue Type: Bug
  Components: Java API
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7130) Remove eventSerializer from NFA

2017-07-07 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7130:
---

 Summary: Remove eventSerializer from NFA
 Key: FLINK-7130
 URL: https://issues.apache.org/jira/browse/FLINK-7130
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


Right now eventSerializer is serialized within NFA. It should be present only 
in NFASerializer.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7129) Dynamically changing patterns

2017-07-07 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-7129:
---

 Summary: Dynamically changing patterns
 Key: FLINK-7129
 URL: https://issues.apache.org/jira/browse/FLINK-7129
 Project: Flink
  Issue Type: New Feature
  Components: CEP
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


An umbrella task for introducing a mechanism for injecting patterns through a 
co-stream.
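The idea can be illustrated without any Flink APIs: one side of a co-stream carries events, the other carries pattern updates that replace the currently active condition at runtime. Everything below is a hypothetical sketch (class and method names are made up, and the "pattern" is reduced to a single condition), not actual Flink CEP API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

/**
 * Hypothetical sketch of dynamic pattern injection: a control channel
 * (one side of a co-stream) swaps the active condition while the event
 * side keeps being evaluated against whatever condition is current.
 */
public class DynamicPatternSketch {

	/** The currently active "pattern" (reduced here to a single condition). */
	private volatile Predicate<String> activeCondition = e -> false;

	/** Control side: inject a new pattern at runtime. */
	public void injectPattern(Predicate<String> newCondition) {
		this.activeCondition = newCondition;
	}

	/** Event side: evaluate against whatever pattern is active right now. */
	public boolean matches(String event) {
		return activeCondition.test(event);
	}

	public static void main(String[] args) {
		DynamicPatternSketch processor = new DynamicPatternSketch();
		List<String> matched = new ArrayList<>();

		processor.injectPattern(e -> e.startsWith("a"));
		for (String e : new String[]{"a1", "b1"}) {
			if (processor.matches(e)) {
				matched.add(e);
			}
		}

		// A control message swaps the pattern mid-stream.
		processor.injectPattern(e -> e.startsWith("b"));
		for (String e : new String[]{"a2", "b2"}) {
			if (processor.matches(e)) {
				matched.add(e);
			}
		}

		System.out.println(matched); // [a1, b2]
	}
}
```

In a real implementation the condition would be a compiled NFA and the update channel a second input stream; the volatile swap here only stands in for that coordination.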



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-6983) Do not serialize {{State}}s with {{NFA}}

2017-06-22 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-6983:
---

 Summary: Do not serialize {{State}}s with {{NFA}}
 Key: FLINK-6983
 URL: https://issues.apache.org/jira/browse/FLINK-6983
 Project: Flink
  Issue Type: Improvement
  Components: CEP
Reporter: Dawid Wysakowicz






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-6943) Improve exceptions within TypeExtractionUtils#getSingleAbstractMethod

2017-06-19 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-6943:
---

 Summary: Improve exceptions within 
TypeExtractionUtils#getSingleAbstractMethod
 Key: FLINK-6943
 URL: https://issues.apache.org/jira/browse/FLINK-6943
 Project: Flink
  Issue Type: Bug
  Components: Core
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


The exception message seems to be inexact. Also, if there is no SAM, {{sam}} 
would be null upon returning from the method.

The suggestion from a review was to change the message and add a check (for 
null {{sam}}) prior to returning.

Another suggestion is to check whether the given class is an interface, as a 
lambda can only be passed for an interface.
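The suggested checks can be sketched with plain reflection. This is a hypothetical standalone illustration (the real method lives in {{TypeExtractionUtils}} and works on Flink's type hierarchy): verify the target is an interface, ignore methods that merely redeclare public {{java.lang.Object}} methods, and fail with a descriptive message instead of ever returning null.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Hypothetical sketch of the suggested fixes: fail early and clearly. */
public class SamUtils {

	public static Method getSingleAbstractMethod(Class<?> baseClass) {
		// Suggested check #1: lambdas can only target interfaces.
		if (!baseClass.isInterface()) {
			throw new IllegalArgumentException(
				"Given class " + baseClass.getName() + " is not an interface; "
					+ "a lambda can only implement an interface.");
		}
		List<Method> abstractMethods = new ArrayList<>();
		for (Method m : baseClass.getMethods()) {
			// Skip redeclared public Object methods (e.g. Comparator#equals);
			// they do not count as the single abstract method.
			if (Modifier.isAbstract(m.getModifiers()) && !isObjectMethod(m)) {
				abstractMethods.add(m);
			}
		}
		// Suggested check #2: never return a null SAM; fail with a clear message.
		if (abstractMethods.isEmpty()) {
			throw new IllegalStateException(
				"Interface " + baseClass.getName() + " has no abstract method.");
		}
		if (abstractMethods.size() > 1) {
			throw new IllegalStateException(
				"Interface " + baseClass.getName()
					+ " has more than one abstract method.");
		}
		return abstractMethods.get(0);
	}

	private static boolean isObjectMethod(Method m) {
		try {
			Object.class.getMethod(m.getName(), m.getParameterTypes());
			return true;
		} catch (NoSuchMethodException e) {
			return false;
		}
	}

	public static void main(String[] args) {
		// Comparator declares equals(Object), which must be ignored.
		System.out.println(getSingleAbstractMethod(Comparator.class).getName());
	}
}
```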



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-6884) Incompatible TableSource#getReturnType and actual input type should fail faster

2017-06-10 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-6884:
---

 Summary: Incompatible TableSource#getReturnType and actual input 
type should fail faster
 Key: FLINK-6884
 URL: https://issues.apache.org/jira/browse/FLINK-6884
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.3.0
Reporter: Dawid Wysakowicz


When {{TableSource#getReturnType}} differs from the actual input type, an 
exception is thrown during the code generation step that is hard to trace 
back:

{code}
org.apache.flink.table.codegen.CodeGenException: Arity of result type does not match number of expressions.
	at org.apache.flink.table.codegen.CodeGenerator.generateResultExpression(CodeGenerator.scala:940)
	at org.apache.flink.table.codegen.CodeGenerator.generateConverterResultExpression(CodeGenerator.scala:883)
	at org.apache.flink.table.plan.nodes.CommonScan$class.generatedConversionFunction(CommonScan.scala:57)
	at org.apache.flink.table.plan.nodes.datastream.StreamTableSourceScan.generatedConversionFunction(StreamTableSourceScan.scala:35)
	at org.apache.flink.table.plan.nodes.datastream.StreamScan$class.convertToInternalRow(StreamScan.scala:48)
	at org.apache.flink.table.plan.nodes.datastream.StreamTableSourceScan.convertToInternalRow(StreamTableSourceScan.scala:35)
	at org.apache.flink.table.plan.nodes.datastream.StreamTableSourceScan.translateToPlan(StreamTableSourceScan.scala:107)
{code}

It would be nice if a more meaningful exception were thrown earlier.
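A minimal sketch of the idea, with hypothetical names (a real fix would compare {{TableSource#getReturnType}} against the actual input type during plan validation rather than at code generation): check the arities up front and fail with a message that names both sides.

```java
/**
 * Hypothetical early validation: compare the arity declared by the source
 * with the arity of the actual input, and fail with a traceable message
 * instead of surfacing a generic CodeGenException deep in code generation.
 */
public class ReturnTypeValidation {

	public static void validateArity(String sourceName, int declaredArity, int actualArity) {
		if (declaredArity != actualArity) {
			throw new IllegalStateException(
				"TableSource '" + sourceName + "' declares a return type with "
					+ declaredArity + " field(s), but the actual input has "
					+ actualArity + " field(s). Please make sure "
					+ "getReturnType() matches the produced records.");
		}
	}

	public static void main(String[] args) {
		validateArity("mySource", 3, 3); // matching arities pass silently
		try {
			validateArity("mySource", 3, 2);
		} catch (IllegalStateException e) {
			System.out.println(e.getMessage());
		}
	}
}
```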



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6854) flink-connector-elasticsearch5_2.10 artifact is not published to mvnrepository

2017-06-06 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-6854:
---

 Summary: flink-connector-elasticsearch5_2.10 artifact is not 
published to mvnrepository
 Key: FLINK-6854
 URL: https://issues.apache.org/jira/browse/FLINK-6854
 Project: Flink
  Issue Type: Bug
  Components: Build System, ElasticSearch Connector
Affects Versions: 1.3.0
Reporter: Dawid Wysakowicz
Priority: Critical






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6783) Wrongly extracted TypeInformations for WindowedStream::aggregate

2017-05-31 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-6783:
---

 Summary: Wrongly extracted TypeInformations for 
WindowedStream::aggregate
 Key: FLINK-6783
 URL: https://issues.apache.org/jira/browse/FLINK-6783
 Project: Flink
  Issue Type: Bug
  Components: DataStream API, Type Serialization System
Affects Versions: 1.3.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


The following test fails because of wrongly acquired output type for 
{{AggregateFunction}}:

{code}
@Test
public void testAggregateWithWindowFunctionDifferentResultTypes() throws Exception {
	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

	DataStream<Tuple2<String, Integer>> source = env.fromElements(Tuple2.of("hello", 1), Tuple2.of("hello", 2));

	DataStream<Tuple3<String, String, Integer>> window = source
			.keyBy(new TupleKeySelector())
			.window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.SECONDS)))
			.aggregate(new AggregateFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, String>() {
				@Override
				public Tuple2<String, Integer> createAccumulator() {
					return Tuple2.of("", 0);
				}

				@Override
				public void add(Tuple2<String, Integer> value, Tuple2<String, Integer> accumulator) {

				}

				@Override
				public String getResult(Tuple2<String, Integer> accumulator) {
					return accumulator.f0;
				}

				@Override
				public Tuple2<String, Integer> merge(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
					return Tuple2.of("", 0);
				}
			}, new WindowFunction<String, Tuple3<String, String, Integer>, String, TimeWindow>() {
				@Override
				public void apply(
						String s,
						TimeWindow window,
						Iterable<String> input,
						Collector<Tuple3<String, String, Integer>> out) throws Exception {
					out.collect(Tuple3.of("", "", 0));
				}
			});

	OneInputTransformation<Tuple2<String, Integer>, Tuple3<String, String, Integer>> transform =
			(OneInputTransformation<Tuple2<String, Integer>, Tuple3<String, String, Integer>>) window.getTransformation();

	OneInputStreamOperator<Tuple2<String, Integer>, Tuple3<String, String, Integer>> operator = transform.getOperator();

	Assert.assertTrue(operator instanceof WindowOperator);
	WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?> winOperator =
			(WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?>) operator;

	Assert.assertTrue(winOperator.getTrigger() instanceof EventTimeTrigger);
	Assert.assertTrue(winOperator.getWindowAssigner() instanceof TumblingEventTimeWindows);
	Assert.assertTrue(winOperator.getStateDescriptor() instanceof AggregatingStateDescriptor);

	processElementAndEnsureOutput(
			operator, winOperator.getKeySelector(), BasicTypeInfo.STRING_TYPE_INFO, new Tuple2<>("hello", 1));
}
{code}

The test results in 
{code}
org.apache.flink.api.common.functions.InvalidTypesException: Input mismatch: Tuple type expected.

	at org.apache.flink.api.java.typeutils.TypeExtractor.validateInputType(TypeExtractor.java:1157)
	at org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:451)
	at org.apache.flink.streaming.api.datastream.WindowedStream.aggregate(WindowedStream.java:855)
	at org.apache.flink.streaming.runtime.operators.windowing.WindowTranslationTest.testAggregateWithWindowFunctionDifferentResultTypes(WindowTranslationTest.java:702)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunn

[jira] [Created] (FLINK-6734) Exclude org.apache.flink.api.java.tuple from checkstyle AvoidStarImport

2017-05-26 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-6734:
---

 Summary: Exclude org.apache.flink.api.java.tuple from checkstyle 
AvoidStarImport
 Key: FLINK-6734
 URL: https://issues.apache.org/jira/browse/FLINK-6734
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6733) Remove commented out AvgAggregationFunction.java

2017-05-26 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-6733:
---

 Summary: Remove commented out AvgAggregationFunction.java
 Key: FLINK-6733
 URL: https://issues.apache.org/jira/browse/FLINK-6733
 Project: Flink
  Issue Type: Bug
  Components: Java API
Reporter: Dawid Wysakowicz






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6716) Suppress load errors in checkstyle JavadocMethod

2017-05-25 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-6716:
---

 Summary: Suppress load errors in checkstyle JavadocMethod
 Key: FLINK-6716
 URL: https://issues.apache.org/jira/browse/FLINK-6716
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


Problems occur when adding a {{@throws}} tag to Javadoc with a custom 
{{Exception}}, as in {{flink-cep/Pattern:299}}.

I think that if we add suppression of load errors to 
{{strict-checkstyle.xml}} we will not lose any real value, and it is the only 
way I could find to resolve the issue.
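For reference, a sketch of what the suppression could look like in {{strict-checkstyle.xml}}, assuming the checkstyle version in use still supports the JavadocMethod load-error properties (they were removed in later checkstyle releases, so treat the property names as an assumption to verify):

```xml
<!-- Sketch: do not fail when a referenced exception class cannot be loaded
     while resolving @throws tags; optionally log the failure instead. -->
<module name="JavadocMethod">
  <property name="suppressLoadErrors" value="true"/>
  <property name="logLoadErrors" value="true"/>
</module>
```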



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6658) Use scala Collections in scala CEP API

2017-05-22 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-6658:
---

 Summary: Use scala Collections in scala CEP API
 Key: FLINK-6658
 URL: https://issues.apache.org/jira/browse/FLINK-6658
 Project: Flink
  Issue Type: Bug
  Components: CEP
Affects Versions: 1.3.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6609) Wrong version assignment when multiple TAKEs transitions

2017-05-17 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-6609:
---

 Summary: Wrong version assignment when multiple TAKEs transitions
 Key: FLINK-6609
 URL: https://issues.apache.org/jira/browse/FLINK-6609
 Project: Flink
  Issue Type: Bug
  Components: CEP
Affects Versions: 1.3.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
Priority: Blocker
 Fix For: 1.3.0


This test fails due to wrong version assignment for TAKEs from the same state.

{code}
@Test
public void testMultipleTakesVersionCollision() {
	List<StreamRecord<Event>> inputEvents = new ArrayList<>();

	Event startEvent = new Event(40, "c", 1.0);
	Event middleEvent1 = new Event(41, "a", 2.0);
	Event middleEvent2 = new Event(41, "a", 3.0);
	Event middleEvent3 = new Event(41, "a", 4.0);
	Event middleEvent4 = new Event(41, "a", 5.0);
	Event middleEvent5 = new Event(41, "a", 6.0);
	Event end = new Event(44, "b", 5.0);

	inputEvents.add(new StreamRecord<>(startEvent, 1));
	inputEvents.add(new StreamRecord<>(middleEvent1, 3));
	inputEvents.add(new StreamRecord<>(middleEvent2, 4));
	inputEvents.add(new StreamRecord<>(middleEvent3, 5));
	inputEvents.add(new StreamRecord<>(middleEvent4, 6));
	inputEvents.add(new StreamRecord<>(middleEvent5, 7));
	inputEvents.add(new StreamRecord<>(end, 10));

	Pattern<Event, ?> pattern = Pattern.<Event>begin("start").where(new SimpleCondition<Event>() {
		private static final long serialVersionUID = 5726188262756267490L;

		@Override
		public boolean filter(Event value) throws Exception {
			return value.getName().equals("c");
		}
	}).followedBy("middle1").where(new SimpleCondition<Event>() {
		private static final long serialVersionUID = 5726188262756267490L;

		@Override
		public boolean filter(Event value) throws Exception {
			return value.getName().equals("a");
		}

	}).oneOrMore().allowCombinations().followedBy("middle2").where(new SimpleCondition<Event>() {
		private static final long serialVersionUID = 5726188262756267490L;

		@Override
		public boolean filter(Event value) throws Exception {
			return value.getName().equals("a");
		}
	}).oneOrMore().allowCombinations().followedBy("end").where(new SimpleCondition<Event>() {
		private static final long serialVersionUID = 5726188262756267490L;

		@Override
		public boolean filter(Event value) throws Exception {
			return value.getName().equals("b");
		}
	});

	NFA<Event> nfa = NFACompiler.compile(pattern, Event.createTypeSerializer(), false);

	final List<List<Event>> resultingPatterns = feedNFA(inputEvents, nfa);

	compareMaps(resultingPatterns, Lists.<List<Event>>newArrayList(
			Lists.newArrayList(startEvent, middleEvent1, middleEvent2, middleEvent3, middleEvent4, middleEvent5, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent2, middleEvent3, middleEvent4, middleEvent5, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent2, middleEvent3, middleEvent4, middleEvent5, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent2, middleEvent3, middleEvent4, middleEvent5, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent2, middleEvent3, middleEvent4, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent2, middleEvent4, middleEvent5, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent2, middleEvent3, middleEvent4, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent2, middleEvent3, middleEvent5, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent3, middleEvent4, middleEvent5, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent3, middleEvent4, middleEvent5, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent2, middleEvent3, middleEvent4, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent2, middleEvent3, middleEvent5, end),
			Lists.newArrayList(startEvent, middleEvent1, middleEvent2, mi
