[jira] [Created] (FLINK-7986) Introduce FilterSetOpTransposeRule to Flink

2017-11-04 Thread Ruidong Li (JIRA)
Ruidong Li created FLINK-7986:
-

 Summary: Introduce FilterSetOpTransposeRule to Flink
 Key: FLINK-7986
 URL: https://issues.apache.org/jira/browse/FLINK-7986
 Project: Flink
  Issue Type: Improvement
Reporter: Ruidong Li
Assignee: Ruidong Li
Priority: Trivial


A.unionAll(B).where.groupBy.select  
=>
A.where.unionAll(B.where).groupBy.select

By pushing the filter below the UNION ALL, each input is filtered before its rows 
are shipped, so this rule reduces network IO.
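A minimal sketch of what wiring this up could look like: Calcite already ships 
{{org.apache.calcite.rel.rules.FilterSetOpTransposeRule}}, so the change would 
essentially add that rule to Flink's logical optimization rule set. The 
{{FlinkFilterRules}} class below is hypothetical; only the Calcite classes are real.

{code:java}
import org.apache.calcite.rel.rules.FilterSetOpTransposeRule;
import org.apache.calcite.tools.RuleSet;
import org.apache.calcite.tools.RuleSets;

// Hypothetical sketch: a logical rule set containing Calcite's
// FilterSetOpTransposeRule. The rule rewrites
//   Filter(UnionAll(A, B))  ->  UnionAll(Filter(A), Filter(B))
// so each union input is filtered before its rows go over the network.
public class FlinkFilterRules {
    public static RuleSet logicalOptRules() {
        return RuleSets.ofList(
                FilterSetOpTransposeRule.INSTANCE);
    }
}
{code}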



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7985) Update findbugs-maven-plugin version to 3.0.2

2017-11-04 Thread Hai Zhou UTC+8 (JIRA)
Hai Zhou UTC+8 created FLINK-7985:
-

 Summary: Update findbugs-maven-plugin version to 3.0.2
 Key: FLINK-7985
 URL: https://issues.apache.org/jira/browse/FLINK-7985
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Major
 Fix For: 1.5.0


The findbugs-maven-plugin version used by Flink is pretty old (1.3.9). The old 
version of FindBugs itself has some bugs (e.g. 
http://sourceforge.net/p/findbugs/bugs/918/, hit by HADOOP-10474), and the 
latest version 3.0.2 fixes the "Missing test classes" issue 
(https://github.com/gleclaire/findbugs-maven-plugin/issues/15).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7984) Bump snappy-java to 1.1.4

2017-11-04 Thread Hai Zhou UTC+8 (JIRA)
Hai Zhou UTC+8 created FLINK-7984:
-

 Summary: Bump snappy-java to 1.1.4
 Key: FLINK-7984
 URL: https://issues.apache.org/jira/browse/FLINK-7984
 Project: Flink
  Issue Type: Improvement
Reporter: Hai Zhou UTC+8
Priority: Major


Upgrade the snappy-java version to 1.1.4 (the latest, May 2017). The older 
version has some issues, such as a memory leak 
(https://github.com/xerial/snappy-java/issues/91).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7983) Bump prometheus java client to 0.1.0

2017-11-04 Thread Hai Zhou UTC+8 (JIRA)
Hai Zhou UTC+8 created FLINK-7983:
-

 Summary: Bump prometheus java client to 0.1.0
 Key: FLINK-7983
 URL: https://issues.apache.org/jira/browse/FLINK-7983
 Project: Flink
  Issue Type: Wish
  Components: Build System
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
 Fix For: 1.5.0


Update the {{io.prometheus:simpleclient*}} dependencies from version 0.0.26 to 
0.1.0.
Version 0.1.0 brings several improvements:
{noformat}
[FEATURE] Support gzip compression for HTTPServer
[FEATURE] Support running HTTPServer in daemon thread
[BUGFIX] Shutdown threadpool on stop() for HTTPServer
{noformat}
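For illustration, a minimal sketch of the daemon-thread option mentioned in the 
changelog, assuming the 0.1.0 {{simpleclient_httpserver}} API exposes it as a 
constructor flag; the port and metric name are made up:

{code:java}
import io.prometheus.client.Counter;
import io.prometheus.client.exporter.HTTPServer;

public class PrometheusDaemonExample {
    public static void main(String[] args) throws Exception {
        Counter requests = Counter.build()
                .name("requests_total")
                .help("Total requests.")
                .register();
        requests.inc();

        // Assumption: 0.1.0 allows running the exporter's HTTPServer in a
        // daemon thread, so it does not keep the JVM alive on its own.
        HTTPServer server = new HTTPServer(9249, true);
    }
}
{code}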




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7981) Bump commons-lang3 version to 3.6

2017-11-04 Thread Hai Zhou UTC+8 (JIRA)
Hai Zhou UTC+8 created FLINK-7981:
-

 Summary: Bump commons-lang3 version to 3.6
 Key: FLINK-7981
 URL: https://issues.apache.org/jira/browse/FLINK-7981
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
 Fix For: 1.5.0


Update commons-lang3 from 3.3.2 to 3.6.

{{SerializationUtils.clone()}} of commons-lang3 (<3.5) has a bug that breaks 
thread safety: it sometimes gets stuck because of a race condition when 
initializing an internal hash map.
See https://issues.apache.org/jira/browse/LANG-1251.
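As a hedged illustration of the affected call pattern (the class and field names 
here are made up), the bug could surface when several threads clone serializable 
objects concurrently:

{code:java}
import java.io.Serializable;
import java.util.HashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.commons.lang3.SerializationUtils;

// Illustrative only: with commons-lang3 < 3.5, concurrent clone() calls could
// hit the LANG-1251 race in a lazily initialized internal hash map and hang;
// with 3.5+ the same code is safe.
public class CloneConcurrently {
    static class Config implements Serializable {
        HashMap<String, String> values = new HashMap<>();
    }

    public static void main(String[] args) {
        Config template = new Config();
        template.values.put("key", "value");

        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> SerializationUtils.clone(template));
        }
        pool.shutdown();
    }
}
{code}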

Related: [BEAM-2481: Update commons-lang3 dependency to version 
3.6|https://issues.apache.org/jira/browse/BEAM-2481]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7980) Bump joda-time to 2.9.9

2017-11-04 Thread Hai Zhou UTC+8 (JIRA)
Hai Zhou UTC+8 created FLINK-7980:
-

 Summary: Bump joda-time to 2.9.9
 Key: FLINK-7980
 URL: https://issues.apache.org/jira/browse/FLINK-7980
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Major
 Fix For: 1.5.0


The joda-time dependency is currently at version 2.5 (Oct 2014); bump it to 
2.9.9 (the latest version).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7979) Use Log.*(Object, Throwable) overload to log exceptions

2017-11-04 Thread Hai Zhou UTC+8 (JIRA)
Hai Zhou UTC+8 created FLINK-7979:
-

 Summary: Use Log.*(Object, Throwable) overload to log exceptions
 Key: FLINK-7979
 URL: https://issues.apache.org/jira/browse/FLINK-7979
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Critical
 Fix For: 1.5.0


I found some code that, when logging an exception, converts the exception to a 
string or calls `.getMessage()`.

The better way is to use the Logger method overloads that take a `Throwable` as 
a parameter, so the full stack trace is preserved.
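A minimal before/after sketch with an SLF4J logger (the message and handler 
method here are made up for illustration):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingExample {
    private static final Logger LOG = LoggerFactory.getLogger(LoggingExample.class);

    void handle(Exception e) {
        // Loses the stack trace: only the message string is logged.
        LOG.error("Failed to process record: " + e.getMessage());

        // Preferred: pass the Throwable itself so the full stack trace is logged.
        LOG.error("Failed to process record", e);
    }
}
{code}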



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7978) Kafka011 exactly-once Producer sporadically fails to commit under high parallelism

2017-11-04 Thread Gary Yao (JIRA)
Gary Yao created FLINK-7978:
---

 Summary: Kafka011 exactly-once Producer sporadically fails to 
commit under high parallelism
 Key: FLINK-7978
 URL: https://issues.apache.org/jira/browse/FLINK-7978
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.4.0
Reporter: Gary Yao
Priority: Blocker
 Fix For: 1.4.0


The Kafka011 exactly-once producer sporadically fails to commit/confirm the 
first checkpoint. The behavior seems to be easier to reproduce under high job 
parallelism.

*Logs/Stacktrace*
{noformat}
10:24:35,347 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator
 - Completed checkpoint 1 (191029 bytes in 1435 ms).
10:24:35,349 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 2/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-12], transactionStartTime=1509787474588} from 
checkpoint 1
10:24:35,349 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 1/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-8], transactionStartTime=1509787474393} from 
checkpoint 1
10:24:35,349 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 0/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-4], transactionStartTime=1509787474448} from 
checkpoint 1
10:24:35,350 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 6/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-34], transactionStartTime=1509787474742} from 
checkpoint 1
10:24:35,350 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 4/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-23], transactionStartTime=1509787474777} from 
checkpoint 1
10:24:35,353 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 10/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-52], transactionStartTime=1509787474930} from 
checkpoint 1
10:24:35,350 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 7/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-35], transactionStartTime=1509787474659} from 
checkpoint 1
10:24:35,349 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 5/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-25], transactionStartTime=1509787474652} from 
checkpoint 1
10:24:35,361 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 18/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-92], transactionStartTime=1509787475043} from 
checkpoint 1
10:24:35,349 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 3/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-15], transactionStartTime=1509787474590} from 
checkpoint 1
10:24:35,361 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 13/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-67], transactionStartTime=1509787474962} from 
checkpoint 1
10:24:35,359 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 20/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
kafka-sink-1509787467330-104], transactionStartTime=1509787474654} from 
checkpoint 1
10:24:35,359 INFO  
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
FlinkKafkaProducer011 19/32 - checkpoint 1 complete, committing transaction 
TransactionHolder{handle=KafkaTransactionState