[jira] [Created] (FLINK-7907) Flink Metrics documentation missing Scala examples

2017-10-23 Thread Colin Williams (JIRA)
Colin Williams created FLINK-7907:
-

 Summary: Flink Metrics documentation missing Scala examples
 Key: FLINK-7907
 URL: https://issues.apache.org/jira/browse/FLINK-7907
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Colin Williams
Priority: Minor


The Flink metrics documentation is missing Scala examples for many of the 
metric types. For consistency, there should be Scala examples for all of the 
types.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7906) HadoopS3FileSystemITCase flaky on Travis

2017-10-23 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7906:


 Summary: HadoopS3FileSystemITCase flaky on Travis
 Key: FLINK-7906
 URL: https://issues.apache.org/jira/browse/FLINK-7906
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Priority: Critical


The {{HadoopS3FileSystemITCase}} is flaky on Travis because its access to S3 
was denied.

https://travis-ci.org/tillrohrmann/flink/jobs/291491026





[jira] [Created] (FLINK-7905) HadoopS3FileSystemITCase failed on travis

2017-10-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7905:
---

 Summary: HadoopS3FileSystemITCase failed on travis
 Key: FLINK-7905
 URL: https://issues.apache.org/jira/browse/FLINK-7905
 Project: Flink
  Issue Type: Bug
  Components: FileSystem, Tests
Affects Versions: 1.4.0
 Environment: https://travis-ci.org/zentol/flink/jobs/291550295
Reporter: Chesnay Schepler


{code}
---
 T E S T S
---
Running org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
Tests run: 3, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 3.354 sec <<< 
FAILURE! - in org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
testDirectoryListing(org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase)  
Time elapsed: 0.208 sec  <<< ERROR!
java.nio.file.AccessDeniedException: 
s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/testdir: getFileStatus 
on s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/testdir: 
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 9094999D7456C589), 
S3 Extended Request ID: 
fVIcROQh4E1/GjWYYV6dFp851rjiKtFgNSCO8KkoTmxWbuxz67aDGqRiA/a09q7KS6Mz1Tnyab4=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1579)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1249)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4141)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1256)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1232)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:904)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1553)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:117)
at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.getFileStatus(HadoopFileSystem.java:77)
at org.apache.flink.core.fs.FileSystem.exists(FileSystem.java:509)
at org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase.testDirectoryListing(HadoopS3FileSystemITCase.java:163)

testSimpleFileWriteAndRead(org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase)
  Time elapsed: 0.275 sec  <<< ERROR!
java.nio.file.AccessDeniedException: 
s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/test.txt: 
getFileStatus on 
s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/test.txt: 
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: B3D8126BE6CF169F), 
S3 Extended Request ID: 
T34sn+a/CcCFv+kFR/UbfozAkXXtiLDu2N31Ok5EydgKeJF5I2qXRCC/MkxSi4ymiiVWeSyb8FY=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1579)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1249)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4141)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMeta

[jira] [Created] (FLINK-7904) Enable Flip6 build profile on Travis

2017-10-23 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7904:


 Summary: Enable Flip6 build profile on Travis
 Key: FLINK-7904
 URL: https://issues.apache.org/jira/browse/FLINK-7904
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.5.0


In order to continuously test Flip-6 components, we should add a new Travis 
build matrix entry which runs the Flip-6 test cases.
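
A rough sketch of what such a matrix entry could look like, assuming the 
Flip-6 tests are activated via a dedicated Maven profile; the variable and 
profile names below are hypothetical, not taken from Flink's actual CI config:

```yaml
# Hypothetical .travis.yml fragment: one extra build matrix entry that
# activates a (proposed) flip6 Maven profile.
matrix:
  include:
    - env: TEST="flip6" PROFILE="-Pflip6"
script:
  - mvn verify -B $PROFILE
```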





[jira] [Created] (FLINK-7903) Add Flip6 build profile

2017-10-23 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7903:


 Summary: Add Flip6 build profile
 Key: FLINK-7903
 URL: https://issues.apache.org/jira/browse/FLINK-7903
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.5.0


In order to separate Flip-6 related test cases from non-Flip-6 ones, we 
should introduce a flip6 build profile which runs only the Flip-6 related 
tests. I suggest using JUnit's {{Category}} for that.
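
One way the profile could be wired up, sketched here under the assumption 
that a JUnit 4 marker interface (say, {{Flip6Test}}) serves as the category; 
the profile id and class name are illustrative only:

```xml
<!-- Hypothetical pom.xml fragment: a flip6 profile that runs only tests
     annotated with @Category(Flip6Test.class). -->
<profile>
  <id>flip6</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <!-- Surefire maps JUnit 4 categories onto "groups" -->
          <groups>org.apache.flink.testutils.category.Flip6Test</groups>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```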





[jira] [Created] (FLINK-7902) TwoPhaseCommitSinkFunctions should use custom TypeSerializer

2017-10-23 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-7902:
---

 Summary: TwoPhaseCommitSinkFunctions should use custom 
TypeSerializer
 Key: FLINK-7902
 URL: https://issues.apache.org/jira/browse/FLINK-7902
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.4.0
Reporter: Aljoscha Krettek
Assignee: Piotr Nowojski
Priority: Blocker
 Fix For: 1.4.0


Currently, the {{FlinkKafkaProducer011}} uses {{TypeInformation.of(new 
TypeHint<State<KafkaTransactionState, KafkaTransactionContext>>() {})}} to 
create a {{TypeInformation}} which in turn is used to create a 
{{StateDescriptor}} for the state that the Kafka sink stores.

Behind the scenes, this would be roughly analysed as a 
{{PojoType(GenericType<KafkaTransactionState>, 
GenericType<KafkaTransactionContext>)}} which means we don't have explicit 
control over the serialisation format and we also use Kryo (which is the 
default for {{GenericTypeInfo}}). This can be problematic if we want to evolve 
the state schema in the future or if we want to change Kryo versions.

We should change {{TwoPhaseCommitSinkFunction}} to only have this constructor:
{code}
public TwoPhaseCommitSinkFunction(TypeSerializer<State<TXN, CONTEXT>> 
stateSerializer) {
{code}
and we should then change the {{FlinkKafkaProducer011}} to hand in a 
custom-made {{TypeSerializer}} for the state.
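
To illustrate the kind of explicit control a hand-written serializer gives 
(a versioned, fixed wire format instead of Kryo's reflective one), here is a 
self-contained sketch; {{TxnState}} is a made-up stand-in, not Flink's actual 
transaction state class or its {{TypeSerializer}} API:

```java
// Illustrative only: an explicit, versioned serialization format gives us
// control over schema evolution, unlike Kryo via GenericTypeInfo.
// TxnState is a hypothetical stand-in for the Kafka sink's state.
import java.io.*;

public class ExplicitStateSerializerSketch {
    static final int VERSION = 1; // bump and branch on read to evolve the schema

    static class TxnState {
        final String transactionalId;
        final long producerId;
        TxnState(String transactionalId, long producerId) {
            this.transactionalId = transactionalId;
            this.producerId = producerId;
        }
    }

    static byte[] serialize(TxnState s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(VERSION);           // explicit version tag
        out.writeUTF(s.transactionalId); // fixed, known field order
        out.writeLong(s.producerId);
        return bos.toByteArray();
    }

    static TxnState deserialize(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        int version = in.readInt();
        if (version != VERSION) {
            throw new IOException("unsupported state version: " + version);
        }
        return new TxnState(in.readUTF(), in.readLong());
    }

    public static void main(String[] args) throws IOException {
        TxnState copy = deserialize(serialize(new TxnState("txn-42", 7L)));
        System.out.println(copy.transactionalId + "/" + copy.producerId); // txn-42/7
    }
}
```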





[jira] [Created] (FLINK-7901) Detecting whether an operator is restored doesn't work with chained state (Flink 1.3)

2017-10-23 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-7901:
---

 Summary: Detecting whether an operator is restored doesn't work 
with chained state (Flink 1.3)
 Key: FLINK-7901
 URL: https://issues.apache.org/jira/browse/FLINK-7901
 Project: Flink
  Issue Type: Bug
  Components: DataStream API, State Backends, Checkpointing
Affects Versions: 1.4.0, 1.3.2
Reporter: Aljoscha Krettek
Assignee: Piotr Nowojski
Priority: Blocker
 Fix For: 1.4.0, 1.3.3
 Attachments: StreamingJob.java

Originally reported on the ML: 
https://lists.apache.org/thread.html/22a2cf83de3107aa81a03a921325a191c29df8aa8676798fcd497199@%3Cuser.flink.apache.org%3E

If we have a chain of operators where multiple of them have operator state, 
detection of the {{context.isRestored()}} flag (of {{CheckpointedFunction}}) 
does not work correctly. It's best exemplified using this minimal example where 
both the source and the flatMap have state:
{code}
final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();

env
.addSource(new MaSource()).uid("source-1")
.flatMap(new MaFlatMap()).uid("flatMap-1");

env.execute("testing");
{code}

If I do a savepoint with these UIDs, then change "source-1" to "source-2" and 
restore from the savepoint, {{context.isRestored()}} still reports {{true}} 
for the source.





[jira] [Created] (FLINK-7900) Add a Rich KeySelector

2017-10-23 Thread Hai Zhou UTC+8 (JIRA)
Hai Zhou UTC+8 created FLINK-7900:
-

 Summary: Add a Rich KeySelector
 Key: FLINK-7900
 URL: https://issues.apache.org/jira/browse/FLINK-7900
 Project: Flink
  Issue Type: Improvement
  Components: DataStream API
Reporter: Hai Zhou UTC+8
Priority: Critical


Currently, we only have a plain `KeySelector` function; we should perhaps add 
a `RichKeySelector` (a `RichFunction`) so that users can read configuration 
information when building the key selector they need.
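
A minimal, self-contained sketch of the idea: a selector that reads its 
configuration in an {{open()}} hook before extracting keys. The interfaces 
below are simplified stand-ins written for illustration, not Flink's actual 
API:

```java
// Hypothetical sketch of a "rich" KeySelector: key extraction that can be
// configured via an open() lifecycle hook, mirroring Flink's RichFunction
// pattern. All names here are illustrative, not actual Flink classes.
import java.util.Map;

public class RichKeySelectorSketch {
    // Simplified stand-in for Flink's KeySelector interface.
    interface KeySelector<IN, KEY> {
        KEY getKey(IN value) throws Exception;
    }

    // A "rich" variant that can read configuration before use.
    public abstract static class RichKeySelector<IN, KEY> implements KeySelector<IN, KEY> {
        public void open(Map<String, String> parameters) throws Exception {}
        public void close() throws Exception {}
    }

    // Example: the key's prefix length comes from configuration.
    public static class ConfiguredKeySelector extends RichKeySelector<String, String> {
        private int prefixLength;

        @Override
        public void open(Map<String, String> parameters) {
            prefixLength = Integer.parseInt(parameters.getOrDefault("key.prefix.length", "1"));
        }

        @Override
        public String getKey(String value) {
            return value.substring(0, Math.min(prefixLength, value.length()));
        }
    }

    public static void main(String[] args) throws Exception {
        ConfiguredKeySelector selector = new ConfiguredKeySelector();
        selector.open(Map.of("key.prefix.length", "3"));
        System.out.println(selector.getKey("flink-7900")); // fli
    }
}
```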


