[jira] [Created] (FLINK-10136) Add REPEAT supported in Table API and SQL

2018-08-14 Thread vinoyang (JIRA)
vinoyang created FLINK-10136:


 Summary: Add REPEAT supported in Table API and SQL
 Key: FLINK-10136
 URL: https://issues.apache.org/jira/browse/FLINK-10136
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Reporter: vinoyang
Assignee: vinoyang


Oracle: 
[https://docs.oracle.com/cd/E17952_01/mysql-5.1-en/string-functions.html#function_repeat]

MySQL: 
https://dev.mysql.com/doc/refman/5.5/en/string-functions.html#function_repeat
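
For reference, a minimal Java sketch of the MySQL REPEAT semantics described in the links above (illustrative only, not Flink code; NULL and count handling follow the MySQL manual):

{code}
// Sketch of MySQL-style REPEAT semantics, for reference only.
final class RepeatSketch {

    static String repeat(String str, int count) {
        if (str == null) {
            return null;   // MySQL: a NULL argument yields NULL
        }
        if (count < 1) {
            return "";     // MySQL: count < 1 yields the empty string
        }
        StringBuilder sb = new StringBuilder(str.length() * count);
        for (int i = 0; i < count; i++) {
            sb.append(str);
        }
        return sb.toString();
    }
}
{code}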





[jira] [Commented] (FLINK-10061) Fix unsupported reconfiguration in KafkaTableSink

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579346#comment-16579346
 ] 

ASF GitHub Bot commented on FLINK-10061:


fhueske commented on issue #6495: [FLINK-10061] [table] [kafka] Fix unsupported 
reconfiguration in KafkaTableSink
URL: https://github.com/apache/flink/pull/6495#issuecomment-412777614
 
 
   Hi @tragicjun, I'd prefer to remove the method but we should discuss this 
change on the dev mailing list and see what others think first.
   
   Thanks, Fabian




> Fix unsupported reconfiguration in KafkaTableSink
> -
>
> Key: FLINK-10061
> URL: https://issues.apache.org/jira/browse/FLINK-10061
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.6.0
>Reporter: Jun Zhang
>Assignee: Jun Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>
> When using KafkaTableSink in "table.writeToSink()", the following exception is 
> thrown:
> {quote} java.lang.UnsupportedOperationException: Reconfiguration of this sink 
> is not supported.
> {quote}
>  
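
For context, a hedged sketch of the failing call path; the table environment, the registered table "Orders", and the sink instance are all assumptions of this example:

{code}
import org.apache.flink.streaming.connectors.kafka.KafkaTableSink;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

// Illustrative reproduction; sink construction is elided.
class WriteToKafkaRepro {

    static void repro(StreamTableEnvironment tableEnv, KafkaTableSink kafkaSink) {
        Table result = tableEnv.sqlQuery("SELECT * FROM Orders");
        // writeToSink() re-invokes the sink's configure() with the table's schema;
        // KafkaTableSink rejects that and throws:
        //   java.lang.UnsupportedOperationException: Reconfiguration of this sink is not supported.
        result.writeToSink(kafkaSink);
    }
}
{code}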





[GitHub] fhueske commented on issue #6495: [FLINK-10061] [table] [kafka] Fix unsupported reconfiguration in KafkaTableSink

2018-08-14 Thread GitBox
fhueske commented on issue #6495: [FLINK-10061] [table] [kafka] Fix unsupported 
reconfiguration in KafkaTableSink
URL: https://github.com/apache/flink/pull/6495#issuecomment-412777614
 
 
   Hi @tragicjun, I'd prefer to remove the method but we should discuss this 
change on the dev mailing list and see what others think first.
   
   Thanks, Fabian




[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2018-08-14 Thread Dawid Wysakowicz (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-6810:

Description: 
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

*How To Contribute a build-in scalar function*
Thank you very much for contributing a build-in function. In order to make sure 
your contributions are in a good direction, it is recommended to read the 
following instructions.
# Research the behavior of the function which you are going to contribute in 
major DBMSs. This is very important since we have to understand the exact 
semantics of the function.
# It is recommended to add function both for sql and table-api.
# Every scalar function should add TableAPI docs in  
{{./docs/dev/table/tableApi.md#built-in-functions}}. Add SQL docs in 
{{./docs/dev/table/sql.md#built-in-functions}}. When adding docs for table-api, 
you should add both scala docs and java docs. Make sure your description of the 
function is accurate. Please do not copy documentation from other projects. 
Especially if other projects are not Apache licensed.
# Take overflow, NullPointerException and other exceptions into consideration.
# Add unit tests for every new function and its supported APIs. Have a look at 
{{ScalarFunctionsTest}} for how to implement function tests.

Welcome anybody to add the sub-task about standard database scalar function.




  was:
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

*How To Contribute a build-in scalar function*
Thank you very much for contributing a build-in function. In order to make sure 
your contributions are in a good direction, it is recommended to read the 
following instructions.
# Research the behavior of the function which you are going to contribute in 
major DBMSs. This is very import since we have to understand the exact 
semantics of the function.
# It is recommended to add function both for sql and talbe-api.
# Every scalar function should add TableAPI docs in  
{{./docs/dev/table/tableApi.md#built-in-functions}}. Add SQL docs in 
{{./docs/dev/table/sql.md#built-in-functions}}. When add docs for table-api, 
you should add both scala docs and java docs. Make sure your description of the 
function is accurate. Please do not copy documentation from other projects. 
Especially if other projects are not Apache licensed.
# Take overflow, NullPointerException and other exceptions into consideration.
# Add unit tests for every new function and its supported APIs. Have a look at 
{{ScalarFunctionsTest}} for how to implement function tests.

Welcome anybody to add the sub-task about standard database scalar function.





> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Labels: starter
>
> In this JIRA, will create some sub-task for add specific scalar function, 
> such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}},string-functions {{LPAD}}, etc. 
> *How To Contribute a build-in scalar function*
> Thank you very much for contributing a build-in function. In order to make 
> sure your contributions are in a good direction, it is recommended to read 
> the following instructions.
> # Research the behavior of the function which you are going to contribute in 
> major DBMSs. This is very important since we have to understand the exact 
> semantics of the function.
> # It is recommended to add function both for sql and table-api.
> # Every scalar function should add TableAPI docs in  
> {{./docs/dev/table/tableApi.md#built-in-functions}}. Add SQL docs in 
> {{./docs/dev/table/sql.md#built-in-functions}}. When adding docs for 
> table-api, you should add both scala docs and java docs. Make sure your 
> description of the function is accurate. Please do not copy documentation 
> from other projects. Especially if other projects are not Apache licensed.
> # Take overflow, NullPointerException and other exceptions into consideration.
> # Add unit tests for every new function and its supported APIs. Have a look 
> at {{ScalarFunctionsTest}} for how to implement function tests.
> Welcome anybody to add the sub-task about standard database scalar function.





[jira] [Resolved] (FLINK-9977) Refine the docs for Table/SQL built-in functions

2018-08-14 Thread Xingcan Cui (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingcan Cui resolved FLINK-9977.

   Resolution: Fixed
Fix Version/s: 1.7.0

Fixed in 1.7.0 with commit 5ecdfaa6c1f52a424de7a6bc01c824a0d1f85bf3

> Refine the docs for Table/SQL built-in functions
> 
>
> Key: FLINK-9977
> URL: https://issues.apache.org/jira/browse/FLINK-9977
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Xingcan Cui
>Assignee: Xingcan Cui
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.7.0
>
> Attachments: Java.jpg, SQL.jpg, Scala.jpg
>
>
> There exist some syntax errors and inconsistencies in the documentation and 
> Scala docs of the Table/SQL built-in functions. This issue aims to improve 
> them.
> Also, according to FLINK-10103, we should use single quotes to express 
> strings in SQL. For example, CONCAT("AA", "BB", "CC") should be replaced with 
> CONCAT('AA', 'BB', 'CC'). 





[jira] [Created] (FLINK-10137) YARN: Log completed Containers

2018-08-14 Thread Gary Yao (JIRA)
Gary Yao created FLINK-10137:


 Summary: YARN: Log completed Containers
 Key: FLINK-10137
 URL: https://issues.apache.org/jira/browse/FLINK-10137
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Coordination, ResourceManager, YARN
Affects Versions: 1.6.0, 1.5.2
Reporter: Gary Yao
Assignee: Gary Yao


Currently, the Flink logs do not reveal why a YARN container completed. 
{{YarnResourceManager}} should log the {{ContainerStatus}} when the YARN 
ResourceManager reports completed containers. 

*Acceptance Criteria*
* {{YarnResourceManager#onContainersCompleted(List<ContainerStatus>)}} logs 
completed containers (see the sketch below).
* {{ResourceManager#closeTaskManagerConnection(ResourceID, Exception)}} should 
always log the message in the exception.
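
A hedged sketch of the first criterion (the callback signature is YARN's {{AMRMClientAsync}} handler contract; the log format is an assumption):

{code}
import java.util.List;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only; the real YarnResourceManager override would also release the
// corresponding slots and request replacement containers.
class CompletedContainerLogging {

    private static final Logger LOG = LoggerFactory.getLogger(CompletedContainerLogging.class);

    void onContainersCompleted(List<ContainerStatus> statuses) {
        for (ContainerStatus status : statuses) {
            LOG.info(
                "Container {} completed with exit status {}. Diagnostics: {}",
                status.getContainerId(),
                status.getExitStatus(),
                status.getDiagnostics());
        }
    }
}
{code}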

 





[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2018-08-14 Thread Xingcan Cui (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingcan Cui updated FLINK-6810:
---
Description: 
In this JIRA, we will create some sub-tasks for adding specific scalar 
functions such as mathematical-function {{LOG}}, date-functions
 {{DATEADD}}, string-functions {{LPAD}}, etc.

*How to contribute a build-in scalar function*
Thank you very much for contributing a build-in function. In order to make sure 
your contributions are in a good direction, it is recommended to read the 
following instructions.
 # Research the behavior of the function that you are going to contribute in 
major DBMSs. This is very important since we have to understand the exact 
semantics of the function.
 # It is recommended to add function for both sql and table-api (Java and 
Scala).
 # For every scalar function, add corresponding docs which should include a 
SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make sure 
your description of the function is accurate. Please do not copy documentation 
from other projects. Especially if other projects are not Apache licensed.
 # Take overflow, NullPointerException and other exceptions into consideration.
 # Add unit tests for every new function and its supported APIs. Have a look at 
{{ScalarFunctionsTest}} for how to implement function tests.

Welcome anybody to add the sub-task about standard database scalar function.

  was:
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

*How To Contribute a build-in scalar function*
Thank you very much for contributing a build-in function. In order to make sure 
your contributions are in a good direction, it is recommended to read the 
following instructions.
# Research the behavior of the function which you are going to contribute in 
major DBMSs. This is very important since we have to understand the exact 
semantics of the function.
# It is recommended to add function both for sql and table-api.
# Every scalar function should add TableAPI docs in  
{{./docs/dev/table/tableApi.md#built-in-functions}}. Add SQL docs in 
{{./docs/dev/table/sql.md#built-in-functions}}. When adding docs for table-api, 
you should add both scala docs and java docs. Make sure your description of the 
function is accurate. Please do not copy documentation from other projects. 
Especially if other projects are not Apache licensed.
# Take overflow, NullPointerException and other exceptions into consideration.
# Add unit tests for every new function and its supported APIs. Have a look at 
{{ScalarFunctionsTest}} for how to implement function tests.

Welcome anybody to add the sub-task about standard database scalar function.





> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Labels: starter
>
> In this JIRA, we will create some sub-tasks for adding specific scalar 
> functions such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}}, string-functions {{LPAD}}, etc.
> *How to contribute a build-in scalar function*
> Thank you very much for contributing a build-in function. In order to make 
> sure your contributions are in a good direction, it is recommended to read 
> the following instructions.
>  # Research the behavior of the function that you are going to contribute in 
> major DBMSs. This is very important since we have to understand the exact 
> semantics of the function.
>  # It is recommended to add function for both sql and table-api (Java and 
> Scala).
>  # For every scalar function, add corresponding docs which should include a 
> SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make 
> sure your description of the function is accurate. Please do not copy 
> documentation from other projects. Especially if other projects are not 
> Apache licensed.
>  # Take overflow, NullPointerException and other exceptions into 
> consideration.
>  # Add unit tests for every new function and its supported APIs. Have a look 
> at {{ScalarFunctionsTest}} for how to implement function tests.
> Welcome anybody to add the sub-task about standard database scalar function.





[jira] [Created] (FLINK-10138) Queryable state (rocksdb) end-to-end test failed on Travis

2018-08-14 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-10138:
-

 Summary: Queryable state (rocksdb) end-to-end test failed on Travis
 Key: FLINK-10138
 URL: https://issues.apache.org/jira/browse/FLINK-10138
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.7.0
Reporter: Till Rohrmann
 Fix For: 1.7.0


The {{Queryable state (rocksdb) end-to-end test}} failed on Travis with the 
following exception
{code}
Exception in thread "main" java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: Failed request 54.
 Caused by: java.lang.RuntimeException: Failed request 54.
 Caused by: java.lang.RuntimeException: Error while processing request with ID 
54. Caused by: org.apache.flink.util.FlinkRuntimeException: Error while 
deserializing the user value.
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getValue(RocksDBMapState.java:446)
at 
org.apache.flink.queryablestate.client.state.serialization.KvStateSerializer.serializeMap(KvStateSerializer.java:222)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.getSerializedValue(RocksDBMapState.java:296)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.getSerializedValue(KvStateServerHandler.java:107)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:84)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.EOFException: No more bytes left.
at 
org.apache.flink.api.java.typeutils.runtime.NoFetchingInput.require(NoFetchingInput.java:79)
at com.esotericsoftware.kryo.io.Input.readString(Input.java:452)
at 
com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:132)
at 
com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:641)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:752)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:315)
at 
org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:411)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.deserializeUserValue(RocksDBMapState.java:347)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.access$100(RocksDBMapState.java:66)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getValue(RocksDBMapState.java:444)
... 11 more

at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:95)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.lambda$run$0(AbstractServerHandler.java:273)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:778)
at 
java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2140)
at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

at 
java.util.
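
For context, a hedged sketch of the client-side query that exercises the failing server path; the job ID, state name, and key/value types are illustrative, and 9069 is the default queryable-state proxy port:

{code}
import java.util.concurrent.CompletableFuture;
import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.queryablestate.client.QueryableStateClient;

// Illustrative only: querying a MapState registered under "map-state".
class QsQuerySketch {

    static void query(JobID jobId) throws Exception {
        QueryableStateClient client = new QueryableStateClient("localhost", 9069);
        MapStateDescriptor<String, Long> descriptor =
            new MapStateDescriptor<>("map-state", String.class, Long.class);

        CompletableFuture<MapState<String, Long>> future = client.getKvState(
            jobId, "map-state", "some-key", BasicTypeInfo.STRING_TYPE_INFO, descriptor);

        // The EOFException in the trace above surfaces here: the server fails in
        // KvStateSerializer.serializeMap while serializing the map entries.
        MapState<String, Long> state = future.join();
        System.out.println(state.get("some-key"));
        client.shutdownAndWait();
    }
}
{code}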

[jira] [Updated] (FLINK-10138) Queryable state (rocksdb) end-to-end test failed on Travis

2018-08-14 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-10138:
--
Description: 
The {{Queryable state (rocksdb) end-to-end test}} failed on Travis with the 
following exception
{code}
Exception in thread "main" java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: Failed request 54.
 Caused by: java.lang.RuntimeException: Failed request 54.
 Caused by: java.lang.RuntimeException: Error while processing request with ID 
54. Caused by: org.apache.flink.util.FlinkRuntimeException: Error while 
deserializing the user value.
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getValue(RocksDBMapState.java:446)
at 
org.apache.flink.queryablestate.client.state.serialization.KvStateSerializer.serializeMap(KvStateSerializer.java:222)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.getSerializedValue(RocksDBMapState.java:296)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.getSerializedValue(KvStateServerHandler.java:107)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:84)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.EOFException: No more bytes left.
at 
org.apache.flink.api.java.typeutils.runtime.NoFetchingInput.require(NoFetchingInput.java:79)
at com.esotericsoftware.kryo.io.Input.readString(Input.java:452)
at 
com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:132)
at 
com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:641)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:752)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:315)
at 
org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:411)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.deserializeUserValue(RocksDBMapState.java:347)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.access$100(RocksDBMapState.java:66)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getValue(RocksDBMapState.java:444)
... 11 more

at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:95)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.lambda$run$0(AbstractServerHandler.java:273)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:778)
at 
java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2140)
at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at 
org.apache.flink.streaming.tests.queryablestate.QsStateC

[GitHub] twalthr commented on issue #6337: [FLINK-9853] [table] Add HEX support

2018-08-14 Thread GitBox
twalthr commented on issue #6337: [FLINK-9853] [table] Add HEX support 
URL: https://github.com/apache/flink/pull/6337#issuecomment-412784892
 
 
   Thank you for the quick update @xueyumusic. I will merge this.




[jira] [Commented] (FLINK-9853) add hex support in table api and sql

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579372#comment-16579372
 ] 

ASF GitHub Bot commented on FLINK-9853:
---

twalthr commented on issue #6337: [FLINK-9853] [table] Add HEX support 
URL: https://github.com/apache/flink/pull/6337#issuecomment-412784892
 
 
   Thank you for the quick update @xueyumusic. I will merge this.




> add hex support in table api and sql
> 
>
> Key: FLINK-9853
> URL: https://issues.apache.org/jira/browse/FLINK-9853
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: xueyu
>Priority: Major
>  Labels: pull-request-available
>
> Like in MySQL, HEX can take an int or string argument. For an integer argument 
> N, it returns a hexadecimal string representation of the value of N. For a 
> string argument str, it returns a hexadecimal string representation of str 
> where each byte of each character in str is converted to two hexadecimal 
> digits. 
> Syntax:
> HEX(100) = 64
> HEX('This is a test String.') = '546869732069732061207465737420537472696e672e'
> See more: [link 
> MySQL|https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_hex]
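
For reference, a minimal Java sketch of the MySQL HEX semantics described above (illustrative only; MySQL emits uppercase digits, while the example in the issue shows lowercase):

{code}
import java.nio.charset.StandardCharsets;

// Sketch of MySQL-style HEX semantics, for reference only.
final class HexSketch {

    // HEX(100) -> "64": hexadecimal representation of the numeric value.
    static String hex(long n) {
        return Long.toHexString(n).toUpperCase();
    }

    // HEX('abc') -> "616263": two hex digits per byte of the string.
    static String hex(String str) {
        StringBuilder sb = new StringBuilder(str.length() * 2);
        for (byte b : str.getBytes(StandardCharsets.UTF_8)) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }
}
{code}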





[jira] [Commented] (FLINK-10138) Queryable state (rocksdb) end-to-end test failed on Travis

2018-08-14 Thread Chesnay Schepler (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579380#comment-16579380
 ] 

Chesnay Schepler commented on FLINK-10138:
--

The queryable state test doesn't use the general-purpose testing job; how could 
it be affected by FLINK-8993?

> Queryable state (rocksdb) end-to-end test failed on Travis
> --
>
> Key: FLINK-10138
> URL: https://issues.apache.org/jira/browse/FLINK-10138
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.7.0
>
>
> The {{Queryable state (rocksdb) end-to-end test}} failed on Travis with the 
> following exception
> {code}
> Exception in thread "main" java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: Failed request 54.
>  Caused by: java.lang.RuntimeException: Failed request 54.
>  Caused by: java.lang.RuntimeException: Error while processing request with 
> ID 54. Caused by: org.apache.flink.util.FlinkRuntimeException: Error while 
> deserializing the user value.
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getValue(RocksDBMapState.java:446)
>   at 
> org.apache.flink.queryablestate.client.state.serialization.KvStateSerializer.serializeMap(KvStateSerializer.java:222)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.getSerializedValue(RocksDBMapState.java:296)
>   at 
> org.apache.flink.queryablestate.server.KvStateServerHandler.getSerializedValue(KvStateServerHandler.java:107)
>   at 
> org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:84)
>   at 
> org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
>   at 
> org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.EOFException: No more bytes left.
>   at 
> org.apache.flink.api.java.typeutils.runtime.NoFetchingInput.require(NoFetchingInput.java:79)
>   at com.esotericsoftware.kryo.io.Input.readString(Input.java:452)
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:132)
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
>   at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:641)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:752)
>   at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:315)
>   at 
> org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:411)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.deserializeUserValue(RocksDBMapState.java:347)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.access$100(RocksDBMapState.java:66)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getValue(RocksDBMapState.java:444)
>   ... 11 more
>   at 
> org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:95)
>   at 
> org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
>   at 
> org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   at 
> org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.lambda$run$0(AbstractServerHandler.java:273)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:778)
>   at 
> java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2140)
>   at 
> org.apache.flink.query

[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2018-08-14 Thread Xingcan Cui (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingcan Cui updated FLINK-6810:
---
Attachment: how to add a scalar function.png

> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Labels: starter
> Attachments: how to add a scalar function.png
>
>
> In this JIRA, we will create some sub-tasks for adding specific scalar 
> functions such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}}, string-functions {{LPAD}}, etc.
> *How to contribute a build-in scalar function*
> Thank you very much for contributing a build-in function. In order to make 
> sure your contributions are in a good direction, it is recommended to read 
> the following instructions.
>  # Research the behavior of the function that you are going to contribute in 
> major DBMSs. This is very important since we have to understand the exact 
> semantics of the function.
>  # It is recommended to add function for both sql and table-api (Java and 
> Scala).
>  # For every scalar function, add corresponding docs which should include a 
> SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make 
> sure your description of the function is accurate. Please do not copy 
> documentation from other projects. Especially if other projects are not 
> Apache licensed.
>  # Take overflow, NullPointerException and other exceptions into 
> consideration.
>  # Add unit tests for every new function and its supported APIs. Have a look 
> at {{ScalarFunctionsTest}} for how to implement function tests.
> Welcome anybody to add the sub-task about standard database scalar function.





[GitHub] zentol closed pull request #5985: [FLINK-8135][REST][docs] Add description to MessageParameter

2018-08-14 Thread GitBox
zentol closed pull request #5985: [FLINK-8135][REST][docs] Add description to 
MessageParameter
URL: https://github.com/apache/flink/pull/5985
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/_includes/generated/rest_dispatcher.html 
b/docs/_includes/generated/rest_dispatcher.html
index ab0f4e0a0a4..845dd8bebdf 100644
--- a/docs/_includes/generated/rest_dispatcher.html
+++ b/docs/_includes/generated/rest_dispatcher.html
@@ -234,7 +234,7 @@
 
   
 
-jarid - description
+jarid - String value that identifies a jar. When uploading 
the jar a path is returned, where the filename is the ID. This value is 
equivalent to the `id` field in the list of uploaded jars (/jars).
 
   
 
@@ -280,7 +280,7 @@
 
   
 
-jarid - description
+jarid - String value that identifies a jar. When uploading 
the jar a path is returned, where the filename is the ID. This value is 
equivalent to the `id` field in the list of uploaded jars (/jars).
 
   
 
@@ -290,9 +290,9 @@
 
   
 
-entry-class (optional): description
-parallelism (optional): description
-program-args (optional): description
+entry-class (optional): String value that specifies the fully 
qualified name of the entry point class. Overrides the class defined in the jar 
file manifest.
+parallelism (optional): Positive integer value that specifies 
the desired parallelism for the job.
+program-args (optional): String value that specifies the 
arguments for the program or plan.
 
   
 
@@ -314,7 +314,13 @@
   
 
 {
-  "type" : "any"
+  "type" : "object",
+  "id" : "urn:jsonschema:org:apache:flink:runtime:rest:messages:JobPlanInfo",
+  "properties" : {
+"plan" : {
+  "type" : "any"
+}
+  }
 }
   
  
@@ -340,7 +346,7 @@
 
   
 
-jarid - description
+jarid - String value that identifies a jar. When uploading 
the jar a path is returned, where the filename is the ID. This value is 
equivalent to the `id` field in the list of uploaded jars (/jars).
 
   
 
@@ -350,11 +356,11 @@
 
   
 
-program-args (optional): description
-entry-class (optional): description
-parallelism (optional): description
-allowNonRestoredState (optional): description
-savepointPath (optional): description
+program-args (optional): String value that specifies the 
arguments for the program or plan.
+entry-class (optional): String value that specifies the fully 
qualified name of the entry point class. Overrides the class defined in the jar 
file manifest.
+parallelism (optional): Positive integer value that specifies 
the desired parallelism for the job.
+allowNonRestoredState (optional): Boolean value that 
specifies whether the job submission should be rejected if the savepoint 
contains state that cannot be mapped back to the job.
+savepointPath (optional): String value that specifies the 
path of the savepoint to restore the job from.
 
   
 
@@ -478,7 +484,7 @@
 
   
 
-get (optional): description
+get (optional): Comma-separated list of string values to 
select specific metrics.
 
   
 
@@ -575,7 +581,7 @@
   Response code: 202 Accepted
 
 
-  Submits a job. This call is primarily intended to be 
used by the Flink client. This call expects amultipart/form-data request that 
consists of file uploads for the serialized JobGraph, jars anddistributed cache 
artifacts and an attribute named "request"for the JSON payload.
+  Submits a job. This call is primarily intended to be 
used by the Flink client. This call expects a multipart/form-data request that 
consists of file uploads for the serialized JobGraph, jars and distributed 
cache artifacts and an attribute named "request" for the JSON payload.
 
 
   
@@ -656,9 +662,9 @@
 
   
 
-get (optional): description
-agg (optional): description
-jobs (optional): description
+get (optional): Comma-separated list of string values to 
select specific metrics.
+agg (optional): Comma-separated list of aggregation modes 
which should be calculated. Available aggregations are: "min, max, sum, 
avg".
+jobs (optional): Comma-separated list of 32-character 
hexadecimal strings to select specific jobs.
 
   
 
@@ -753,7 +759,7 @@
 
   
 
-jobid - description
+jobid - 32-character hexadecimal string value that identifies 
a job.
 
   
 
@@ -911,7 +917,7 @@
 
   
 
-jobid - description
+jobid - 32-character hexadecimal string value that identifies 
a job.
 
   
 
@@ -921,7 +927,7 @@
 
   
 
-mode (optional): description
+mode (optional): S

[jira] [Updated] (FLINK-8135) Add description to MessageParameter

2018-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-8135:
--
Labels: pull-request-available  (was: )

> Add description to MessageParameter
> ---
>
> Key: FLINK-8135
> URL: https://issues.apache.org/jira/browse/FLINK-8135
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, REST
>Reporter: Chesnay Schepler
>Assignee: Andrei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> For documentation purposes we should add an {{getDescription()}} method to 
> the {{MessageParameter}} class, describing what this particular parameter is 
> used for and which values are accepted.
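
A hedged sketch of the proposed change (the abstract-method shape is an assumption; the surrounding class body is elided):

{code}
// Sketch of the addition described above.
public abstract class MessageParameter<X> {

    /**
     * Returns a human-readable description of this parameter, stating what it
     * is used for and which values are accepted.
     */
    public abstract String getDescription();

    // ... existing resolution/serialization members of MessageParameter ...
}
{code}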





[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2018-08-14 Thread Xingcan Cui (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingcan Cui updated FLINK-6810:
---
Description: 
In this JIRA, we will create some sub-tasks for adding specific scalar 
functions such as mathematical-function {{LOG}}, date-functions
 {{DATEADD}}, string-functions {{LPAD}}, etc.

*How to contribute a built-in scalar function*
Thank you very much for contributing a built-in function. In order to make sure 
your contributions are in a good direction, it is recommended to read the 
following instructions.
 # Investigate the behavior of the function that you are going to contribute in 
major DBMSs. This is very important since we have to understand the exact 
semantics of the function.
 # It is recommended to add function for both SQL and table-api (Java and 
Scala).
 # For every scalar function, add corresponding docs which should include a 
SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make sure 
your description of the function is accurate. Please do not simply copy 
documentation from other projects, especially if the projects are not Apache 
licensed.
 # Take overflow, NullPointerException and other exceptions into consideration.
 # Add unit tests for every new function and its supported APIs. Have a look at 
{{ScalarFunctionsTest}}, {{SqlExpressionTest}}, 
{{ScalaFunctionsValidationTest}}, etc. for how to implement function tests.

!how to add a scalar function.png!

Welcome anybody to add the sub-task about standard database scalar function.

  was:
In this JIRA, we will create some sub-tasks for adding specific scalar 
functions such as mathematical-function {{LOG}}, date-functions
 {{DATEADD}}, string-functions {{LPAD}}, etc.

*How to contribute a build-in scalar function*
Thank you very much for contributing a build-in function. In order to make sure 
your contributions are in a good direction, it is recommended to read the 
following instructions.
 # Research the behavior of the function that you are going to contribute in 
major DBMSs. This is very important since we have to understand the exact 
semantics of the function.
 # It is recommended to add function for both sql and table-api (Java and 
Scala).
 # For every scalar function, add corresponding docs which should include a 
SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make sure 
your description of the function is accurate. Please do not copy documentation 
from other projects. Especially if other projects are not Apache licensed.
 # Take overflow, NullPointerException and other exceptions into consideration.
 # Add unit tests for every new function and its supported APIs. Have a look at 
{{ScalarFunctionsTest}} for how to implement function tests.

Welcome anybody to add the sub-task about standard database scalar function.


> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Labels: starter
> Attachments: how to add a scalar function.png
>
>
> In this JIRA, we will create some sub-tasks for adding specific scalar 
> functions such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}}, string-functions {{LPAD}}, etc.
> *How to contribute a built-in scalar function*
> Thank you very much for contributing a built-in function. In order to make 
> sure your contributions are in a good direction, it is recommended to read 
> the following instructions.
>  # Investigate the behavior of the function that you are going to contribute 
> in major DBMSs. This is very important since we have to understand the exact 
> semantics of the function.
>  # It is recommended to add function for both SQL and table-api (Java and 
> Scala).
>  # For every scalar function, add corresponding docs which should include a 
> SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make 
> sure your description of the function is accurate. Please do not simply copy 
> documentation from other projects, especially if the projects are not Apache 
> licensed.
>  # Take overflow, NullPointerException and other exceptions into 
> consideration.
>  # Add unit tests for every new function and its supported APIs. Have a look 
> at {{ScalarFunctionsTest}}, {{SqlExpressionTest}}, 
> {{ScalaFunctionsValidationTest}}, etc. for how to implement function tests.
> !how to add a scalar function.png!
> Welcome anybody to add the sub-task about standard database scalar function.
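
To make the steps above concrete, a hedged sketch of prototyping a candidate function with the public {{ScalarFunction}} API before wiring it in as a built-in (the class name and the LPAD semantics are illustrative; actual built-ins go through flink-table's code generation, which this sketch does not cover):

{code}
import org.apache.flink.table.functions.ScalarFunction;

// Illustrative prototype only, not the built-in implementation path.
public class LpadPrototype extends ScalarFunction {

    // The Table API resolves eval() methods by reflection.
    public String eval(String s, Integer len, String pad) {
        if (s == null || len == null || pad == null || len < 0) {
            return null;                    // propagate NULL; negative lengths yield NULL here
        }
        if (s.length() >= len) {
            return s.substring(0, len);     // MySQL truncates to len characters
        }
        if (pad.isEmpty()) {
            return null;                    // empty-pad behavior varies across DBMSs
        }
        StringBuilder padding = new StringBuilder(len - s.length());
        while (padding.length() < len - s.length()) {
            padding.append(pad);
        }
        return padding.substring(0, len - s.length()) + s;
    }
}
{code}

Such a prototype can be registered for ad-hoc testing via {{tableEnv.registerFunction("LPAD_PROTO", new LpadPrototype())}} and compared against the major DBMSs before the built-in version and its docs are written.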





[jira] [Commented] (FLINK-8135) Add description to MessageParameter

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579382#comment-16579382
 ] 

ASF GitHub Bot commented on FLINK-8135:
---

zentol closed pull request #5985: [FLINK-8135][REST][docs] Add description to 
MessageParameter
URL: https://github.com/apache/flink/pull/5985
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/_includes/generated/rest_dispatcher.html 
b/docs/_includes/generated/rest_dispatcher.html
index ab0f4e0a0a4..845dd8bebdf 100644
--- a/docs/_includes/generated/rest_dispatcher.html
+++ b/docs/_includes/generated/rest_dispatcher.html
@@ -234,7 +234,7 @@
 
   
 
-jarid - description
+jarid - String value that identifies a jar. When uploading 
the jar a path is returned, where the filename is the ID. This value is 
equivalent to the `id` field in the list of uploaded jars (/jars).
 
   
 
@@ -280,7 +280,7 @@
 
   
 
-jarid - description
+jarid - String value that identifies a jar. When uploading 
the jar a path is returned, where the filename is the ID. This value is 
equivalent to the `id` field in the list of uploaded jars (/jars).
 
   
 
@@ -290,9 +290,9 @@
 
   
 
-entry-class (optional): description
-parallelism (optional): description
-program-args (optional): description
+entry-class (optional): String value that specifies the fully 
qualified name of the entry point class. Overrides the class defined in the jar 
file manifest.
+parallelism (optional): Positive integer value that specifies 
the desired parallelism for the job.
+program-args (optional): String value that specifies the 
arguments for the program or plan.
 
   
 
@@ -314,7 +314,13 @@
   
 
 {
-  "type" : "any"
+  "type" : "object",
+  "id" : "urn:jsonschema:org:apache:flink:runtime:rest:messages:JobPlanInfo",
+  "properties" : {
+"plan" : {
+  "type" : "any"
+}
+  }
 }
   
  
@@ -340,7 +346,7 @@
 
   
 
-jarid - description
+jarid - String value that identifies a jar. When uploading 
the jar a path is returned, where the filename is the ID. This value is 
equivalent to the `id` field in the list of uploaded jars (/jars).
 
   
 
@@ -350,11 +356,11 @@
 
   
 
-program-args (optional): description
-entry-class (optional): description
-parallelism (optional): description
-allowNonRestoredState (optional): description
-savepointPath (optional): description
+program-args (optional): String value that specifies the 
arguments for the program or plan.
+entry-class (optional): String value that specifies the fully 
qualified name of the entry point class. Overrides the class defined in the jar 
file manifest.
+parallelism (optional): Positive integer value that specifies 
the desired parallelism for the job.
+allowNonRestoredState (optional): Boolean value that 
specifies whether the job submission should be rejected if the savepoint 
contains state that cannot be mapped back to the job.
+savepointPath (optional): String value that specifies the 
path of the savepoint to restore the job from.
 
   
 
@@ -478,7 +484,7 @@
 
   
 
-get (optional): description
+get (optional): Comma-separated list of string values to 
select specific metrics.
 
   
 
@@ -575,7 +581,7 @@
   Response code: 202 Accepted
 
 
-  Submits a job. This call is primarily intended to be 
used by the Flink client. This call expects amultipart/form-data request that 
consists of file uploads for the serialized JobGraph, jars anddistributed cache 
artifacts and an attribute named "request"for the JSON payload.
+  Submits a job. This call is primarily intended to be 
used by the Flink client. This call expects a multipart/form-data request that 
consists of file uploads for the serialized JobGraph, jars and distributed 
cache artifacts and an attribute named "request" for the JSON payload.
 
 
   
@@ -656,9 +662,9 @@
 
   
 
-get (optional): description
-agg (optional): description
-jobs (optional): description
+get (optional): Comma-separated list of string values to 
select specific metrics.
+agg (optional): Comma-separated list of aggregation modes 
which should be calculated. Available aggregations are: "min, max, sum, 
avg".
+jobs (optional): Comma-separated list of 32-character 
hexadecimal strings to select specific jobs.
 
   
 
@@ -753,7 +759,7 @@
 
   
 
-jobid - description
+jobid - 32-character hexadecimal string value that identifies 
a job.
 
   
  

[jira] [Closed] (FLINK-8135) Add description to MessageParameter

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-8135.
---
Resolution: Fixed

master: b231c9ac49975ca4f37678dabae4149c90540ab3

> Add description to MessageParameter
> ---
>
> Key: FLINK-8135
> URL: https://issues.apache.org/jira/browse/FLINK-8135
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, REST
>Reporter: Chesnay Schepler
>Assignee: Andrei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> For documentation purposes we should add an {{getDescription()}} method to 
> the {{MessageParameter}} class, describing what this particular parameter is 
> used for and which values are accepted.





[jira] [Updated] (FLINK-8135) Add description to MessageParameter

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-8135:

Fix Version/s: (was: 1.5.3)
   (was: 1.6.1)

> Add description to MessageParameter
> ---
>
> Key: FLINK-8135
> URL: https://issues.apache.org/jira/browse/FLINK-8135
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, REST
>Reporter: Chesnay Schepler
>Assignee: Andrei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> For documentation purposes we should add an {{getDescription()}} method to 
> the {{MessageParameter}} class, describing what this particular parameter is 
> used for and which values are accepted.





[GitHub] zentol closed pull request #6513: [FLINK-10089][tests] Update FlinkKafkaConsumerBaseMigrationTest for 1.5

2018-08-14 Thread GitBox
zentol closed pull request #6513: [FLINK-10089][tests] Update 
FlinkKafkaConsumerBaseMigrationTest for 1.5
URL: https://github.com/apache/flink/pull/6513
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseMigrationTest.java
 
b/flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseMigrationTest.java
index 768ac16547c..6776cd5eb22 100644
--- 
a/flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseMigrationTest.java
+++ 
b/flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseMigrationTest.java
@@ -93,7 +93,7 @@
 
@Parameterized.Parameters(name = "Migration Savepoint: {0}")
public static Collection parameters () {
-   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4);
+   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4, MigrationVersion.v1_5);
}
 
public FlinkKafkaConsumerBaseMigrationTest(MigrationVersion 
testMigrateVersion) {
diff --git 
a/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-empty-state-snapshot
 
b/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-empty-state-snapshot
new file mode 100644
index 000..affc8b9b756
Binary files /dev/null and 
b/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-empty-state-snapshot
 differ
diff --git 
a/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-snapshot
 
b/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-snapshot
new file mode 100644
index 000..773b363b859
Binary files /dev/null and 
b/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-snapshot
 differ


 




[GitHub] zentol closed pull request #6512: [FLINK-10087][tests] Update BucketingSinkMigrationTest for 1.5

2018-08-14 Thread GitBox
zentol closed pull request #6512: [FLINK-10087][tests] Update 
BucketingSinkMigrationTest for 1.5
URL: https://github.com/apache/flink/pull/6512
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSinkMigrationTest.java
 
b/flink-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSinkMigrationTest.java
index 307f10fb558..38cc9149ee2 100644
--- 
a/flink-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSinkMigrationTest.java
+++ 
b/flink-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSinkMigrationTest.java
@@ -80,12 +80,16 @@ public static void verifyOS() {
Assume.assumeTrue("HDFS cluster cannot be started on Windows 
without extensions.", !OperatingSystem.isWindows());
}
 
+   /**
+* The bucket file prefix is the absolute path to the part files, which 
is stored within the savepoint.
+*/
@Parameterized.Parameters(name = "Migration Savepoint / Bucket Files 
Prefix: {0}")
public static Collection> parameters 
() {
return Arrays.asList(
Tuple2.of(MigrationVersion.v1_2, 
"/var/folders/v_/ry2wp5fx0y7c1rvr41xy9_70gn/T/junit9160378385359106772/junit479663758539998903/1970-01-01--01/part-0-"),
Tuple2.of(MigrationVersion.v1_3, 
"/var/folders/tv/b_1d8fvx23dgk1_xs8db_95hgn/T/junit4273542175898623023/junit3801102997056424640/1970-01-01--01/part-0-"),
-   Tuple2.of(MigrationVersion.v1_4, 
"/var/folders/tv/b_1d8fvx23dgk1_xs8db_95hgn/T/junit3198043255809479705/junit8947526563966405708/1970-01-01--01/part-0-"));
+   Tuple2.of(MigrationVersion.v1_4, 
"/var/folders/tv/b_1d8fvx23dgk1_xs8db_95hgn/T/junit3198043255809479705/junit8947526563966405708/1970-01-01--01/part-0-"),
+   Tuple2.of(MigrationVersion.v1_5, 
"/tmp/junit4927100426019463155/junit2465610012100182280/1970-01-01--00/part-0-"));
}
 
private final MigrationVersion testMigrateVersion;
diff --git 
a/flink-connectors/flink-connector-filesystem/src/test/resources/bucketing-sink-migration-test-flink1.5-snapshot
 
b/flink-connectors/flink-connector-filesystem/src/test/resources/bucketing-sink-migration-test-flink1.5-snapshot
new file mode 100644
index 000..eab6510f4fe
Binary files /dev/null and 
b/flink-connectors/flink-connector-filesystem/src/test/resources/bucketing-sink-migration-test-flink1.5-snapshot
 differ


 




[GitHub] zentol closed pull request #6514: [FLINK-10090][tests] Update ContinuousFileProcessingMigrationTest for 1.5

2018-08-14 Thread GitBox
zentol closed pull request #6514: [FLINK-10090][tests] Update 
ContinuousFileProcessingMigrationTest for 1.5
URL: https://github.com/apache/flink/pull/6514
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileProcessingMigrationTest.java
 
b/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileProcessingMigrationTest.java
index d29ded567df..615ec2349c5 100644
--- 
a/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileProcessingMigrationTest.java
+++ 
b/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileProcessingMigrationTest.java
@@ -78,7 +78,8 @@
return Arrays.asList(
Tuple2.of(MigrationVersion.v1_2, 1493116191000L),
Tuple2.of(MigrationVersion.v1_3, 149653200L),
-   Tuple2.of(MigrationVersion.v1_4, 1516897628000L));
+   Tuple2.of(MigrationVersion.v1_4, 1516897628000L),
+   Tuple2.of(MigrationVersion.v1_5, 1533639934000L));
}
 
/**
diff --git 
a/flink-fs-tests/src/test/resources/monitoring-function-migration-test-1533639934000-flink1.5-snapshot
 
b/flink-fs-tests/src/test/resources/monitoring-function-migration-test-1533639934000-flink1.5-snapshot
new file mode 100644
index 000..d3959b4ebbb
Binary files /dev/null and 
b/flink-fs-tests/src/test/resources/monitoring-function-migration-test-1533639934000-flink1.5-snapshot
 differ
diff --git 
a/flink-fs-tests/src/test/resources/reader-migration-test-flink1.5-snapshot 
b/flink-fs-tests/src/test/resources/reader-migration-test-flink1.5-snapshot
new file mode 100644
index 000..6b56e8a5beb
Binary files /dev/null and 
b/flink-fs-tests/src/test/resources/reader-migration-test-flink1.5-snapshot 
differ


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10089) Update FlinkKafkaConsumerBaseMigrationTest

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579385#comment-16579385
 ] 

ASF GitHub Bot commented on FLINK-10089:


zentol closed pull request #6513: [FLINK-10089][tests] Update 
FlinkKafkaConsumerBaseMigrationTest for 1.5
URL: https://github.com/apache/flink/pull/6513
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseMigrationTest.java
 
b/flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseMigrationTest.java
index 768ac16547c..6776cd5eb22 100644
--- 
a/flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseMigrationTest.java
+++ 
b/flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseMigrationTest.java
@@ -93,7 +93,7 @@
 
@Parameterized.Parameters(name = "Migration Savepoint: {0}")
public static Collection<MigrationVersion> parameters() {
-   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4);
+   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4, MigrationVersion.v1_5);
}
 
public FlinkKafkaConsumerBaseMigrationTest(MigrationVersion 
testMigrateVersion) {
diff --git 
a/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-empty-state-snapshot
 
b/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-empty-state-snapshot
new file mode 100644
index 000..affc8b9b756
Binary files /dev/null and 
b/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-empty-state-snapshot
 differ
diff --git 
a/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-snapshot
 
b/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-snapshot
new file mode 100644
index 000..773b363b859
Binary files /dev/null and 
b/flink-connectors/flink-connector-kafka-base/src/test/resources/kafka-consumer-migration-test-flink1.5-snapshot
 differ


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Update FlinkKafkaConsumerBaseMigrationTest
> --
>
> Key: FLINK-10089
> URL: https://issues.apache.org/jira/browse/FLINK-10089
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kafka Connector
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10087) Update BucketingSinkMigrationTest

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579384#comment-16579384
 ] 

ASF GitHub Bot commented on FLINK-10087:


zentol closed pull request #6512: [FLINK-10087][tests] Update 
BucketingSinkMigrationTest for 1.5
URL: https://github.com/apache/flink/pull/6512
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSinkMigrationTest.java
 
b/flink-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSinkMigrationTest.java
index 307f10fb558..38cc9149ee2 100644
--- 
a/flink-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSinkMigrationTest.java
+++ 
b/flink-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSinkMigrationTest.java
@@ -80,12 +80,16 @@ public static void verifyOS() {
Assume.assumeTrue("HDFS cluster cannot be started on Windows 
without extensions.", !OperatingSystem.isWindows());
}
 
+   /**
+* The bucket file prefix is the absolute path to the part files, which 
is stored within the savepoint.
+*/
@Parameterized.Parameters(name = "Migration Savepoint / Bucket Files 
Prefix: {0}")
public static Collection<Tuple2<MigrationVersion, String>> parameters() {
return Arrays.asList(
Tuple2.of(MigrationVersion.v1_2, 
"/var/folders/v_/ry2wp5fx0y7c1rvr41xy9_70gn/T/junit9160378385359106772/junit479663758539998903/1970-01-01--01/part-0-"),
Tuple2.of(MigrationVersion.v1_3, 
"/var/folders/tv/b_1d8fvx23dgk1_xs8db_95hgn/T/junit4273542175898623023/junit3801102997056424640/1970-01-01--01/part-0-"),
-   Tuple2.of(MigrationVersion.v1_4, 
"/var/folders/tv/b_1d8fvx23dgk1_xs8db_95hgn/T/junit3198043255809479705/junit8947526563966405708/1970-01-01--01/part-0-"));
+   Tuple2.of(MigrationVersion.v1_4, 
"/var/folders/tv/b_1d8fvx23dgk1_xs8db_95hgn/T/junit3198043255809479705/junit8947526563966405708/1970-01-01--01/part-0-"),
+   Tuple2.of(MigrationVersion.v1_5, 
"/tmp/junit4927100426019463155/junit2465610012100182280/1970-01-01--00/part-0-"));
}
 
private final MigrationVersion testMigrateVersion;
diff --git 
a/flink-connectors/flink-connector-filesystem/src/test/resources/bucketing-sink-migration-test-flink1.5-snapshot
 
b/flink-connectors/flink-connector-filesystem/src/test/resources/bucketing-sink-migration-test-flink1.5-snapshot
new file mode 100644
index 000..eab6510f4fe
Binary files /dev/null and 
b/flink-connectors/flink-connector-filesystem/src/test/resources/bucketing-sink-migration-test-flink1.5-snapshot
 differ


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Update BucketingSinkMigrationTest
> -
>
> Key: FLINK-10087
> URL: https://issues.apache.org/jira/browse/FLINK-10087
> Project: Flink
>  Issue Type: Sub-task
>  Components: filesystem-connector
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol closed pull request #6515: [FLINK-10091][tests] Update WindowOperatorMigrationTest for 1.5

2018-08-14 Thread GitBox
zentol closed pull request #6515: [FLINK-10091][tests] Update 
WindowOperatorMigrationTest for 1.5
URL: https://github.com/apache/flink/pull/6515
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorMigrationTest.java
 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorMigrationTest.java
index 2f2c751764d..44fd3745242 100644
--- 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorMigrationTest.java
+++ 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorMigrationTest.java
@@ -86,7 +86,7 @@
 
@Parameterized.Parameters(name = "Migration Savepoint: {0}")
public static Collection<MigrationVersion> parameters() {
-   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4);
+   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4, MigrationVersion.v1_5);
}
 
private static final TypeInformation<Tuple2<String, Integer>> STRING_INT_TUPLE =
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-apply-event-time-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-apply-event-time-flink1.5-snapshot
new file mode 100644
index 000..2fd807545be
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-apply-event-time-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-apply-processing-time-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-apply-processing-time-flink1.5-snapshot
new file mode 100644
index 000..ccc84e15784
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-apply-processing-time-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-kryo-serialized-key-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-kryo-serialized-key-flink1.5-snapshot
new file mode 100644
index 000..b9ccd04c0d1
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-kryo-serialized-key-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-event-time-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-event-time-flink1.5-snapshot
new file mode 100644
index 000..22f5db639df
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-event-time-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-processing-time-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-processing-time-flink1.5-snapshot
new file mode 100644
index 000..10051805c52
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-processing-time-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-flink1.5-snapshot
new file mode 100644
index 000..9046b5bd6ad
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-mint-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-mint-flink1.5-snapshot
new file mode 100644
index 000..e585d53c83c
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-mint-flink1.5-snapshot
 differ


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2018-08-14 Thread Xingcan Cui (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingcan Cui updated FLINK-6810:
---
Description: 
In this JIRA, we will create some sub-tasks for adding specific scalar 
functions such as mathematical-function {{LOG}}, date-functions
 {{DATEADD}}, string-functions {{LPAD}}, etc.

*How to contribute a built-in scalar function*
Thank you very much for contributing a built-in function. In order to make sure 
your contributions are in a good direction, it is recommended to read the 
following instructions.
 # Investigate the behavior of the function that you are going to contribute in 
major DBMSs. This is very important since we have to understand the exact 
semantics of the function.
 # It is recommended to add function for both SQL and table-API (Java and 
Scala).
 # For every scalar function, add corresponding docs which should include a 
SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make sure 
your description of the function is accurate. Please do not simply copy 
documentation from other projects, especially if the projects are not Apache 
licensed.
 # Take overflow, NullPointerException and other exceptions into consideration.
 # Add unit tests for every new function and its supported APIs. Have a look at 
{{ScalarFunctionsTest}}, {{SqlExpressionTest}}, 
{{ScalaFunctionsValidationTest}}, etc. for how to implement function tests.

!how to add a scalar function.png!

Welcome anybody to add the sub-task about standard database scalar function.

  was:
In this JIRA, we will create some sub-tasks for adding specific scalar 
functions such as mathematical-function {{LOG}}, date-functions
 {{DATEADD}}, string-functions {{LPAD}}, etc.

*How to contribute a built-in scalar function*
Thank you very much for contributing a built-in function. In order to make sure 
your contributions are in a good direction, it is recommended to read the 
following instructions.
 # Investigate the behavior of the function that you are going to contribute in 
major DBMSs. This is very important since we have to understand the exact 
semantics of the function.
 # It is recommended to add function for both SQL and table-api (Java and 
Scala).
 # For every scalar function, add corresponding docs which should include a 
SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make sure 
your description of the function is accurate. Please do not simply copy 
documentation from other projects, especially if the projects are not Apache 
licensed.
 # Take overflow, NullPointerException and other exceptions into consideration.
 # Add unit tests for every new function and its supported APIs. Have a look at 
{{ScalarFunctionsTest}}, {{SqlExpressionTest}}, 
{{ScalaFunctionsValidationTest}}, etc. for how to implement function tests.

!how to add a scalar function.png!

Welcome anybody to add the sub-task about standard database scalar function.


> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Labels: starter
> Attachments: how to add a scalar function.png
>
>
> In this JIRA, we will create some sub-tasks for adding specific scalar 
> functions such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}}, string-functions {{LPAD}}, etc.
> *How to contribute a built-in scalar function*
> Thank you very much for contributing a built-in function. In order to make 
> sure your contributions are in a good direction, it is recommended to read 
> the following instructions.
>  # Investigate the behavior of the function that you are going to contribute 
> in major DBMSs. This is very important since we have to understand the exact 
> semantics of the function.
>  # It is recommended to add function for both SQL and table-API (Java and 
> Scala).
>  # For every scalar function, add corresponding docs which should include a 
> SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make 
> sure your description of the function is accurate. Please do not simply copy 
> documentation from other projects, especially if the projects are not Apache 
> licensed.
>  # Take overflow, NullPointerException and other exceptions into 
> consideration.
>  # Add unit tests for every new function and its supported APIs. Have a look 
> at {{ScalarFunctionsTest}}, {{SqlExpressionTest}}, 
> {{ScalaFunctionsValidationTest}}, etc. for how to implement function tests.
> !how to add a scalar function.png!
> Welcome anybody to add the sub-task about standard database scalar function.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[GitHub] zentol closed pull request #6516: [FLINK-10092][tests] Update StatefulJobSavepointMigrationITCase for 1.5

2018-08-14 Thread GitBox
zentol closed pull request #6516: [FLINK-10092][tests] Update 
StatefulJobSavepointMigrationITCase for 1.5
URL: https://github.com/apache/flink/pull/6516
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-tests/src/test/java/org/apache/flink/test/checkpointing/utils/StatefulJobSavepointMigrationITCase.java
 
b/flink-tests/src/test/java/org/apache/flink/test/checkpointing/utils/StatefulJobSavepointMigrationITCase.java
index c74f304e45a..e8a47b06a28 100644
--- 
a/flink-tests/src/test/java/org/apache/flink/test/checkpointing/utils/StatefulJobSavepointMigrationITCase.java
+++ 
b/flink-tests/src/test/java/org/apache/flink/test/checkpointing/utils/StatefulJobSavepointMigrationITCase.java
@@ -78,7 +78,9 @@
public static Collection<Tuple2<MigrationVersion, String>> parameters() {
return Arrays.asList(
Tuple2.of(MigrationVersion.v1_4, 
StateBackendLoader.MEMORY_STATE_BACKEND_NAME),
-   Tuple2.of(MigrationVersion.v1_4, 
StateBackendLoader.ROCKSDB_STATE_BACKEND_NAME));
+   Tuple2.of(MigrationVersion.v1_4, 
StateBackendLoader.ROCKSDB_STATE_BACKEND_NAME),
+   Tuple2.of(MigrationVersion.v1_5, 
StateBackendLoader.MEMORY_STATE_BACKEND_NAME),
+   Tuple2.of(MigrationVersion.v1_5, 
StateBackendLoader.ROCKSDB_STATE_BACKEND_NAME));
}
 
private final MigrationVersion testMigrateVersion;
diff --git 
a/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-rocksdb-savepoint/_metadata
 
b/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-rocksdb-savepoint/_metadata
new file mode 100644
index 000..812d581a0a1
Binary files /dev/null and 
b/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-rocksdb-savepoint/_metadata
 differ
diff --git 
a/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-savepoint/_metadata
 
b/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-savepoint/_metadata
new file mode 100644
index 000..f9eb3180fbc
Binary files /dev/null and 
b/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-savepoint/_metadata
 differ


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10090) Update ContinuousFileProcessingMigrationTest

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579387#comment-16579387
 ] 

ASF GitHub Bot commented on FLINK-10090:


zentol closed pull request #6514: [FLINK-10090][tests] Update 
ContinuousFileProcessingMigrationTest for 1.5
URL: https://github.com/apache/flink/pull/6514
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileProcessingMigrationTest.java
 
b/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileProcessingMigrationTest.java
index d29ded567df..615ec2349c5 100644
--- 
a/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileProcessingMigrationTest.java
+++ 
b/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileProcessingMigrationTest.java
@@ -78,7 +78,8 @@
return Arrays.asList(
Tuple2.of(MigrationVersion.v1_2, 1493116191000L),
Tuple2.of(MigrationVersion.v1_3, 149653200L),
-   Tuple2.of(MigrationVersion.v1_4, 1516897628000L));
+   Tuple2.of(MigrationVersion.v1_4, 1516897628000L),
+   Tuple2.of(MigrationVersion.v1_5, 1533639934000L));
}
 
/**
diff --git 
a/flink-fs-tests/src/test/resources/monitoring-function-migration-test-1533639934000-flink1.5-snapshot
 
b/flink-fs-tests/src/test/resources/monitoring-function-migration-test-1533639934000-flink1.5-snapshot
new file mode 100644
index 000..d3959b4ebbb
Binary files /dev/null and 
b/flink-fs-tests/src/test/resources/monitoring-function-migration-test-1533639934000-flink1.5-snapshot
 differ
diff --git 
a/flink-fs-tests/src/test/resources/reader-migration-test-flink1.5-snapshot 
b/flink-fs-tests/src/test/resources/reader-migration-test-flink1.5-snapshot
new file mode 100644
index 000..6b56e8a5beb
Binary files /dev/null and 
b/flink-fs-tests/src/test/resources/reader-migration-test-flink1.5-snapshot 
differ


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Update ContinuousFileProcessingMigrationTest
> 
>
> Key: FLINK-10090
> URL: https://issues.apache.org/jira/browse/FLINK-10090
> Project: Flink
>  Issue Type: Sub-task
>  Components: filesystem-connector
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10091) Update WindowOperatorMigrationTest

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579388#comment-16579388
 ] 

ASF GitHub Bot commented on FLINK-10091:


zentol closed pull request #6515: [FLINK-10091][tests] Update 
WindowOperatorMigrationTest for 1.5
URL: https://github.com/apache/flink/pull/6515
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorMigrationTest.java
 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorMigrationTest.java
index 2f2c751764d..44fd3745242 100644
--- 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorMigrationTest.java
+++ 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorMigrationTest.java
@@ -86,7 +86,7 @@
 
@Parameterized.Parameters(name = "Migration Savepoint: {0}")
public static Collection<MigrationVersion> parameters() {
-   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4);
+   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4, MigrationVersion.v1_5);
}
 
private static final TypeInformation<Tuple2<String, Integer>> STRING_INT_TUPLE =
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-apply-event-time-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-apply-event-time-flink1.5-snapshot
new file mode 100644
index 000..2fd807545be
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-apply-event-time-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-apply-processing-time-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-apply-processing-time-flink1.5-snapshot
new file mode 100644
index 000..ccc84e15784
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-apply-processing-time-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-kryo-serialized-key-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-kryo-serialized-key-flink1.5-snapshot
new file mode 100644
index 000..b9ccd04c0d1
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-kryo-serialized-key-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-event-time-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-event-time-flink1.5-snapshot
new file mode 100644
index 000..22f5db639df
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-event-time-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-processing-time-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-processing-time-flink1.5-snapshot
new file mode 100644
index 000..10051805c52
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-reduce-processing-time-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-flink1.5-snapshot
new file mode 100644
index 000..9046b5bd6ad
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-flink1.5-snapshot
 differ
diff --git 
a/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-mint-flink1.5-snapshot
 
b/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-mint-flink1.5-snapshot
new file mode 100644
index 000..e585d53c83c
Binary files /dev/null and 
b/flink-streaming-java/src/test/resources/win-op-migration-test-session-with-stateful-trigger-mint-flink1.5-snapshot
 differ


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Update WindowOperatorMigrationTest
> ---
>
> Key: FLINK-10091
> URL: https://issues.apache.org/jira/browse/FLINK-10091
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[GitHub] zentol closed pull request #6511: [FLINK-10085][tests] Update AbstractOperatorRestoreTestBase for 1.5

2018-08-14 Thread GitBox
zentol closed pull request #6511: [FLINK-10085][tests] Update 
AbstractOperatorRestoreTestBase for 1.5
URL: https://github.com/apache/flink/pull/6511
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/keyed/AbstractKeyedOperatorRestoreTestBase.java
 
b/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/keyed/AbstractKeyedOperatorRestoreTestBase.java
index 36301dfd77f..0e582694362 100644
--- 
a/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/keyed/AbstractKeyedOperatorRestoreTestBase.java
+++ 
b/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/keyed/AbstractKeyedOperatorRestoreTestBase.java
@@ -41,7 +41,7 @@
 
@Parameterized.Parameters(name = "Migrate Savepoint: {0}")
public static Collection<MigrationVersion> parameters() {
-   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4);
+   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4, MigrationVersion.v1_5);
}
 
public AbstractKeyedOperatorRestoreTestBase(MigrationVersion 
migrationVersion) {
diff --git 
a/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/unkeyed/AbstractNonKeyedOperatorRestoreTestBase.java
 
b/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/unkeyed/AbstractNonKeyedOperatorRestoreTestBase.java
index 6c9767d011e..74b2086c842 100644
--- 
a/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/unkeyed/AbstractNonKeyedOperatorRestoreTestBase.java
+++ 
b/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/unkeyed/AbstractNonKeyedOperatorRestoreTestBase.java
@@ -47,7 +47,7 @@
 
@Parameterized.Parameters(name = "Migrate Savepoint: {0}")
public static Collection<MigrationVersion> parameters() {
-   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4);
+   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4, MigrationVersion.v1_5);
}
 
protected AbstractNonKeyedOperatorRestoreTestBase(MigrationVersion 
migrationVersion) {
diff --git 
a/flink-tests/src/test/resources/operatorstate/complexKeyed-flink1.5/_metadata 
b/flink-tests/src/test/resources/operatorstate/complexKeyed-flink1.5/_metadata
new file mode 100644
index 000..f606c021692
Binary files /dev/null and 
b/flink-tests/src/test/resources/operatorstate/complexKeyed-flink1.5/_metadata 
differ
diff --git 
a/flink-tests/src/test/resources/operatorstate/nonKeyed-flink1.5/_metadata 
b/flink-tests/src/test/resources/operatorstate/nonKeyed-flink1.5/_metadata
new file mode 100644
index 000..fedfe0b38f9
Binary files /dev/null and 
b/flink-tests/src/test/resources/operatorstate/nonKeyed-flink1.5/_metadata 
differ


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10092) Update StatefulJobSavepointMigrationITCase

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579389#comment-16579389
 ] 

ASF GitHub Bot commented on FLINK-10092:


zentol closed pull request #6516: [FLINK-10092][tests] Update 
StatefulJobSavepointMigrationITCase for 1.5
URL: https://github.com/apache/flink/pull/6516
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-tests/src/test/java/org/apache/flink/test/checkpointing/utils/StatefulJobSavepointMigrationITCase.java
 
b/flink-tests/src/test/java/org/apache/flink/test/checkpointing/utils/StatefulJobSavepointMigrationITCase.java
index c74f304e45a..e8a47b06a28 100644
--- 
a/flink-tests/src/test/java/org/apache/flink/test/checkpointing/utils/StatefulJobSavepointMigrationITCase.java
+++ 
b/flink-tests/src/test/java/org/apache/flink/test/checkpointing/utils/StatefulJobSavepointMigrationITCase.java
@@ -78,7 +78,9 @@
public static Collection<Tuple2<MigrationVersion, String>> parameters() {
return Arrays.asList(
Tuple2.of(MigrationVersion.v1_4, 
StateBackendLoader.MEMORY_STATE_BACKEND_NAME),
-   Tuple2.of(MigrationVersion.v1_4, 
StateBackendLoader.ROCKSDB_STATE_BACKEND_NAME));
+   Tuple2.of(MigrationVersion.v1_4, 
StateBackendLoader.ROCKSDB_STATE_BACKEND_NAME),
+   Tuple2.of(MigrationVersion.v1_5, 
StateBackendLoader.MEMORY_STATE_BACKEND_NAME),
+   Tuple2.of(MigrationVersion.v1_5, 
StateBackendLoader.ROCKSDB_STATE_BACKEND_NAME));
}
 
private final MigrationVersion testMigrateVersion;
diff --git 
a/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-rocksdb-savepoint/_metadata
 
b/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-rocksdb-savepoint/_metadata
new file mode 100644
index 000..812d581a0a1
Binary files /dev/null and 
b/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-rocksdb-savepoint/_metadata
 differ
diff --git 
a/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-savepoint/_metadata
 
b/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-savepoint/_metadata
new file mode 100644
index 000..f9eb3180fbc
Binary files /dev/null and 
b/flink-tests/src/test/resources/new-stateful-udf-migration-itcase-flink1.5-savepoint/_metadata
 differ


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Update StatefulJobSavepointMigrationITCase
> --
>
> Key: FLINK-10092
> URL: https://issues.apache.org/jira/browse/FLINK-10092
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10085) Update AbstractOperatorRestoreTestBase

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579390#comment-16579390
 ] 

ASF GitHub Bot commented on FLINK-10085:


zentol closed pull request #6511: [FLINK-10085][tests] Update 
AbstractOperatorRestoreTestBase for 1.5
URL: https://github.com/apache/flink/pull/6511
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/keyed/AbstractKeyedOperatorRestoreTestBase.java
 
b/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/keyed/AbstractKeyedOperatorRestoreTestBase.java
index 36301dfd77f..0e582694362 100644
--- 
a/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/keyed/AbstractKeyedOperatorRestoreTestBase.java
+++ 
b/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/keyed/AbstractKeyedOperatorRestoreTestBase.java
@@ -41,7 +41,7 @@
 
@Parameterized.Parameters(name = "Migrate Savepoint: {0}")
public static Collection<MigrationVersion> parameters() {
-   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4);
+   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4, MigrationVersion.v1_5);
}
 
public AbstractKeyedOperatorRestoreTestBase(MigrationVersion 
migrationVersion) {
diff --git 
a/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/unkeyed/AbstractNonKeyedOperatorRestoreTestBase.java
 
b/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/unkeyed/AbstractNonKeyedOperatorRestoreTestBase.java
index 6c9767d011e..74b2086c842 100644
--- 
a/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/unkeyed/AbstractNonKeyedOperatorRestoreTestBase.java
+++ 
b/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/unkeyed/AbstractNonKeyedOperatorRestoreTestBase.java
@@ -47,7 +47,7 @@
 
@Parameterized.Parameters(name = "Migrate Savepoint: {0}")
public static Collection<MigrationVersion> parameters() {
-   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4);
+   return Arrays.asList(MigrationVersion.v1_2, 
MigrationVersion.v1_3, MigrationVersion.v1_4, MigrationVersion.v1_5);
}
 
protected AbstractNonKeyedOperatorRestoreTestBase(MigrationVersion 
migrationVersion) {
diff --git 
a/flink-tests/src/test/resources/operatorstate/complexKeyed-flink1.5/_metadata 
b/flink-tests/src/test/resources/operatorstate/complexKeyed-flink1.5/_metadata
new file mode 100644
index 000..f606c021692
Binary files /dev/null and 
b/flink-tests/src/test/resources/operatorstate/complexKeyed-flink1.5/_metadata 
differ
diff --git 
a/flink-tests/src/test/resources/operatorstate/nonKeyed-flink1.5/_metadata 
b/flink-tests/src/test/resources/operatorstate/nonKeyed-flink1.5/_metadata
new file mode 100644
index 000..fedfe0b38f9
Binary files /dev/null and 
b/flink-tests/src/test/resources/operatorstate/nonKeyed-flink1.5/_metadata 
differ


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Update AbstractOperatorRestoreTestBase
> --
>
> Key: FLINK-10085
> URL: https://issues.apache.org/jira/browse/FLINK-10085
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing, Streaming, Tests
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2018-08-14 Thread Xingcan Cui (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingcan Cui updated FLINK-6810:
---
Attachment: (was: how to add a scalar function.png)

> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Labels: starter
> Attachments: how to add a scalar function.png
>
>
> In this JIRA, we will create some sub-tasks for adding specific scalar 
> functions such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}}, string-functions {{LPAD}}, etc.
> *How to contribute a built-in scalar function*
> Thank you very much for contributing a built-in function. In order to make 
> sure your contributions are in a good direction, it is recommended to read 
> the following instructions.
>  # Investigate the behavior of the function that you are going to contribute 
> in major DBMSs. This is very important since we have to understand the exact 
> semantics of the function.
>  # It is recommended to add function for both SQL and table-API (Java and 
> Scala).
>  # For every scalar function, add corresponding docs which should include a 
> SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make 
> sure your description of the function is accurate. Please do not simply copy 
> documentation from other projects, especially if the projects are not Apache 
> licensed.
>  # Take overflow, NullPointerException and other exceptions into 
> consideration.
>  # Add unit tests for every new function and its supported APIs. Have a look 
> at {{ScalarFunctionsTest}}, {{SqlExpressionTest}}, 
> {{ScalaFunctionsValidationTest}}, etc. for how to implement function tests.
> !how to add a scalar function.png!
> Welcome anybody to add the sub-task about standard database scalar function.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10085) Update AbstractOperatorRestoreTestBase

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-10085.

Resolution: Fixed

master: 2dd289f3f3e91b9747908d282e075b8acac32fbe
1.6: 2856ebe3bb2fb106c96ae602f2347b736836bc20

> Update AbstractOperatorRestoreTestBase
> --
>
> Key: FLINK-10085
> URL: https://issues.apache.org/jira/browse/FLINK-10085
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing, Streaming, Tests
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10089) Update FlinkKafkaConsumerBaseMigrationTest

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-10089.

Resolution: Fixed

master: 7fb32dce7807e77c68a003986a38bacb8b89bfc1
1.6: 36139f71888f7d17c9da8a5c288adf73b3fb33df

> Update FlinkKafkaConsumerBaseMigrationTest
> --
>
> Key: FLINK-10089
> URL: https://issues.apache.org/jira/browse/FLINK-10089
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kafka Connector
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10087) Update BucketingSinkMigrationTest

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-10087.

Resolution: Fixed

master: 4d6915f346e2ec18c37009c842d949f3fafe4252
1.6: 1e4b35ea1a4f299d241d35e3ea570fe5a7c904e0

> Update BucketingSinkMigrationTest
> -
>
> Key: FLINK-10087
> URL: https://issues.apache.org/jira/browse/FLINK-10087
> Project: Flink
>  Issue Type: Sub-task
>  Components: filesystem-connector
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2018-08-14 Thread Xingcan Cui (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingcan Cui updated FLINK-6810:
---
Attachment: how to add a scalar function.png

> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Labels: starter
> Attachments: how to add a scalar function.png
>
>
> In this JIRA, we will create some sub-tasks for adding specific scalar 
> functions such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}}, string-functions {{LPAD}}, etc.
> *How to contribute a built-in scalar function*
> Thank you very much for contributing a built-in function. In order to make 
> sure your contributions are in a good direction, it is recommended to read 
> the following instructions.
>  # Investigate the behavior of the function that you are going to contribute 
> in major DBMSs. This is very important since we have to understand the exact 
> semantics of the function.
>  # It is recommended to add function for both SQL and table-API (Java and 
> Scala).
>  # For every scalar function, add corresponding docs which should include a 
> SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make 
> sure your description of the function is accurate. Please do not simply copy 
> documentation from other projects, especially if the projects are not Apache 
> licensed.
>  # Take overflow, NullPointerException and other exceptions into 
> consideration.
>  # Add unit tests for every new function and its supported APIs. Have a look 
> at {{ScalarFunctionsTest}}, {{SqlExpressionTest}}, 
> {{ScalaFunctionsValidationTest}}, etc. for how to implement function tests.
> !how to add a scalar function.png!
> Welcome anybody to add the sub-task about standard database scalar function.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10090) Update ContinuousFileProcessingMigrationTest

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-10090.

Resolution: Fixed

master: 600423b96d485b962a10682b9c336c35df8fbec9
1.6: f16eb987dd3406b969c84e08c9dac7cc49c65290

> Update ContinuousFileProcessingMigrationTest
> 
>
> Key: FLINK-10090
> URL: https://issues.apache.org/jira/browse/FLINK-10090
> Project: Flink
>  Issue Type: Sub-task
>  Components: filesystem-connector
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2018-08-14 Thread Xingcan Cui (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingcan Cui updated FLINK-6810:
---
Description: 
In this JIRA, we will create some sub-tasks for adding specific scalar 
functions such as mathematical-function {{LOG}}, date-functions
 {{DATEADD}}, string-functions {{LPAD}}, etc.

*How to contribute a built-in scalar function*
Thank you very much for contributing a built-in function. In order to make sure 
your contributions are in a good direction, it is recommended to read the 
following instructions.
 # Investigate the behavior of the function that you are going to contribute in 
major DBMSs. This is very important since we have to understand the exact 
semantics of the function.
 # It is recommended to add function for both SQL and table-API (Java and 
Scala).
 # For every scalar function, add corresponding docs which should include a 
SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make sure 
your description of the function is accurate. Please do not simply copy 
documentation from other projects, especially if the projects are not Apache 
licensed.
 # Take overflow, NullPointerException and other exceptions into consideration.
 # Add unit tests for every new function and its supported APIs. Have a look at 
{{ScalarFunctionsTest}}, {{SqlExpressionTest}}, 
{{ScalaFunctionsValidationTest}}, etc. for how to implement function tests.
 !how to add a scalar function.png! 
Welcome anybody to add the sub-task about standard database scalar function.

  was:
In this JIRA, we will create some sub-tasks for adding specific scalar 
functions such as mathematical-function {{LOG}}, date-functions
 {{DATEADD}}, string-functions {{LPAD}}, etc.

*How to contribute a built-in scalar function*
Thank you very much for contributing a built-in function. In order to make sure 
your contributions are in a good direction, it is recommended to read the 
following instructions.
 # Investigate the behavior of the function that you are going to contribute in 
major DBMSs. This is very important since we have to understand the exact 
semantics of the function.
 # It is recommended to add function for both SQL and table-API (Java and 
Scala).
 # For every scalar function, add corresponding docs which should include a 
SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make sure 
your description of the function is accurate. Please do not simply copy 
documentation from other projects, especially if the projects are not Apache 
licensed.
 # Take overflow, NullPointerException and other exceptions into consideration.
 # Add unit tests for every new function and its supported APIs. Have a look at 
{{ScalarFunctionsTest}}, {{SqlExpressionTest}}, 
{{ScalaFunctionsValidationTest}}, etc. for how to implement function tests.

!how to add a scalar function.png!

Welcome anybody to add the sub-task about standard database scalar function.


> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Labels: starter
> Attachments: how to add a scalar function.png
>
>
> In this JIRA, we will create some sub-tasks for adding specific scalar 
> functions such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}}, string-functions {{LPAD}}, etc.
> *How to contribute a built-in scalar function*
> Thank you very much for contributing a built-in function. In order to make 
> sure your contributions are in a good direction, it is recommended to read 
> the following instructions.
>  # Investigate the behavior of the function that you are going to contribute 
> in major DBMSs. This is very important since we have to understand the exact 
> semantics of the function.
>  # It is recommended to add function for both SQL and table-API (Java and 
> Scala).
>  # For every scalar function, add corresponding docs which should include a 
> SQL, a Java and a Scala version in {{./docs/dev/table/functions.md}}. Make 
> sure your description of the function is accurate. Please do not simply copy 
> documentation from other projects, especially if the projects are not Apache 
> licensed.
>  # Take overflow, NullPointerException and other exceptions into 
> consideration.
>  # Add unit tests for every new function and its supported APIs. Have a look 
> at {{ScalarFunctionsTest}}, {{SqlExpressionTest}}, 
> {{ScalaFunctionsValidationTest}}, etc. for how to implement function tests.
>  !how to add a scalar function.png! 
> Welcome anybody to add the sub-task about standard database scalar function.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Closed] (FLINK-10084) Migration tests weren't updated for 1.5

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-10084.

Resolution: Fixed

> Migration tests weren't updated for 1.5
> ---
>
> Key: FLINK-10084
> URL: https://issues.apache.org/jira/browse/FLINK-10084
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>
> We have several tests to verify that state taken in previous versions can be 
> properly restored in the current one.
> Most of these tests were not updated to cover the migration from 1.5.
> Below are all the migration tests I could find:
> * AbstractOperatorRestoreTestBase
> * CEPMigrationTest
> * BucketingSinkMigrationTest
> * FlinkKafkaConsumerBaseMigrationTest
> * ContinuousFileProcessingMigrationTest
> * WindowOperatorMigrationTest
> * StatefulJobSavepointMigrationITCase
> * StatefulJobWBroadcastStateMigrationITCase



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10092) Update StatefulJobSavepointMigrationITCase

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-10092.

Resolution: Fixed

master: c5f9ea2753c7e673dd887a67c9b770bec2f400cf
1.6: 45a4030dc6c30f939f9094725649dadb5ea3dbdb

> Update StatefulJobSavepointMigrationITCase
> --
>
> Key: FLINK-10092
> URL: https://issues.apache.org/jira/browse/FLINK-10092
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10084) Migration tests weren't updated for 1.5

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-10084:
-
Issue Type: Test  (was: Bug)

> Migration tests weren't updated for 1.5
> ---
>
> Key: FLINK-10084
> URL: https://issues.apache.org/jira/browse/FLINK-10084
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>
> We have several tests to verify that state taken in previous versions can be 
> properly restored in the current one.
> Most of these tests were not updated to cover the migration from 1.5.
> Below are all the migration tests I could find:
> * AbstractOperatorRestoreTestBase
> * CEPMigrationTest
> * BucketingSinkMigrationTest
> * FlinkKafkaConsumerBaseMigrationTest
> * ContinuousFileProcessingMigrationTest
> * WindowOperatorMigrationTest
> * StatefulJobSavepointMigrationITCase
> * StatefulJobWBroadcastStateMigrationITCase



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10091) Update WindowOperatorMigrationTest

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-10091.

Resolution: Fixed

master: f9f82efea783b0adb1eb063e434229f21895
1.6: a09be8f75dc43b60f618b1e37662ac67f5fa0135

> Update WindowOperatorMigrationTest
> --
>
> Key: FLINK-10091
> URL: https://issues.apache.org/jira/browse/FLINK-10091
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10139) Update migration tests for 1.6

2018-08-14 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-10139:


 Summary: Update migration tests for 1.6
 Key: FLINK-10139
 URL: https://issues.apache.org/jira/browse/FLINK-10139
 Project: Flink
  Issue Type: Test
  Components: Tests
Affects Versions: 1.7.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.7.0


Similar to FLINK-10084 we have to update the migration tests for 1.6.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10135) The JobManager doesn't report the cluster-level metrics

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-10135:
-
Affects Version/s: 1.7.0
   1.6.0

> The JobManager doesn't report the cluster-level metrics
> ---
>
> Key: FLINK-10135
> URL: https://issues.apache.org/jira/browse/FLINK-10135
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager, Metrics
>Affects Versions: 1.5.0, 1.6.0, 1.7.0
>Reporter: Joey Echeverria
>Assignee: vinoyang
>Priority: Major
>
> In [the documentation for 
> metrics|https://ci.apache.org/projects/flink/flink-docs-release-1.5/monitoring/metrics.html#cluster]
>  in the Flink 1.5.0 release, it says that the following metrics are reported 
> by the JobManager:
> {noformat}
> numRegisteredTaskManagers
> numRunningJobs
> taskSlotsAvailable
> taskSlotsTotal
> {noformat}
> In the job manager REST endpoint 
> ({{http://<jobmanager-host>:8081/jobmanager/metrics}}), those metrics don't 
> appear.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
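
As context for the reports above: a quick way to verify which metrics the
JobManager actually exposes is to query the REST endpoint directly. The
following is a minimal sketch, assuming a JobManager reachable at
localhost:8081 (host and port are illustrative); it simply prints the raw
JSON response so the cluster-level metric names can be grepped for.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JobManagerMetricsCheck {

    public static void main(String[] args) throws Exception {
        // Assumed address of a locally running JobManager web/REST endpoint.
        URL url = new URL("http://localhost:8081/jobmanager/metrics");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            // The endpoint answers with a JSON array of {"id": "<metric name>"}
            // entries; look for numRegisteredTaskManagers, numRunningJobs, etc.
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}
```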


[jira] [Updated] (FLINK-10135) The JobManager doesn't report the cluster-level metrics

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-10135:
-
Priority: Critical  (was: Major)

> The JobManager doesn't report the cluster-level metrics
> ---
>
> Key: FLINK-10135
> URL: https://issues.apache.org/jira/browse/FLINK-10135
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager, Metrics
>Affects Versions: 1.5.0, 1.6.0, 1.7.0
>Reporter: Joey Echeverria
>Assignee: vinoyang
>Priority: Critical
>
> In [the documentation for 
> metrics|https://ci.apache.org/projects/flink/flink-docs-release-1.5/monitoring/metrics.html#cluster]
>  in the Flink 1.5.0 release, it says that the following metrics are reported 
> by the JobManager:
> {noformat}
> numRegisteredTaskManagers
> numRunningJobs
> taskSlotsAvailable
> taskSlotsTotal
> {noformat}
> In the job manager REST endpoint 
> ({{http://<jobmanager-host>:8081/jobmanager/metrics}}), those metrics don't 
> appear.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10135) The JobManager doesn't report the cluster-level metrics

2018-08-14 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-10135:
-
Component/s: Metrics

> The JobManager doesn't report the cluster-level metrics
> ---
>
> Key: FLINK-10135
> URL: https://issues.apache.org/jira/browse/FLINK-10135
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager, Metrics
>Affects Versions: 1.5.0, 1.6.0, 1.7.0
>Reporter: Joey Echeverria
>Assignee: vinoyang
>Priority: Critical
>
> In [the documentation for 
> metrics|https://ci.apache.org/projects/flink/flink-docs-release-1.5/monitoring/metrics.html#cluster]
>  in the Flink 1.5.0 release, it says that the following metrics are reported 
> by the JobManager:
> {noformat}
> numRegisteredTaskManagers
> numRunningJobs
> taskSlotsAvailable
> taskSlotsTotal
> {noformat}
> In the job manager REST endpoint 
> ({{http://<jobmanager-host>:8081/jobmanager/metrics}}), those metrics don't 
> appear.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on a change in pull request #6468: [FLINK-9900][tests] Include more information on timeout in Zookeeper HA ITCase

2018-08-14 Thread GitBox
zentol commented on a change in pull request #6468: [FLINK-9900][tests] Include 
more information on timeout in Zookeeper HA ITCase
URL: https://github.com/apache/flink/pull/6468#discussion_r209865380
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/ZooKeeperHighAvailabilityITCase.java
 ##
 @@ -256,13 +262,54 @@ public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IO
() -> clusterClient.getJobStatus(jobID),
Time.milliseconds(50),
deadline,
-   (jobStatus) -> jobStatus == JobStatus.FINISHED,
+   JobStatus::isGloballyTerminalState,
TestingUtils.defaultScheduledExecutor());
-   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   try {
+   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   } catch (Throwable e) {
+   // include additional debugging information
+   StringWriter error = new StringWriter();
+   try (PrintWriter out = new PrintWriter(error)) {
+   out.println("The job did not finish in time.");
+   
out.println("allowedInitializeCallsWithoutRestore= " + 
CheckpointBlockingFunction.allowedInitializeCallsWithoutRestore.get());
+   out.println("illegalRestores= " + 
CheckpointBlockingFunction.illegalRestores.get());
+   out.println("successfulRestores= " + 
CheckpointBlockingFunction.successfulRestores.get());
+   out.println("afterMessWithZooKeeper= " + 
CheckpointBlockingFunction.afterMessWithZooKeeper.get());
+   out.println("failedAlready= " + 
CheckpointBlockingFunction.failedAlready.get());
+   out.println("currentJobStatus= " + 
clusterClient.getJobStatus(jobID).get());
+   out.println("numRestarts= " + 
RestartReporter.numRestarts.getValue());
+   out.println("threadDump= " + 
generateThreadDump());
+   }
+   throw new AssertionError(error.toString(), 
ExceptionUtils.stripCompletionException(e));
+   }
 
assertThat("We saw illegal restores.", 
CheckpointBlockingFunction.illegalRestores.get(), is(0));
}
 
+   private static String generateThreadDump() {
+   final StringBuilder dump = new StringBuilder();
+   final ThreadMXBean threadMXBean = 
ManagementFactory.getThreadMXBean();
+   final ThreadInfo[] threadInfos = 
threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
+   for (ThreadInfo threadInfo : threadInfos) {
+   dump.append('"');
+   dump.append(threadInfo.getThreadName());
+   dump.append('"');
+   final Thread.State state = threadInfo.getThreadState();
+   dump.append(System.lineSeparator());
+   dump.append("   java.lang.Thread.State: ");
+   dump.append(state);
+   final StackTraceElement[] stackTraceElements = 
threadInfo.getStackTrace();
+   for (final StackTraceElement stackTraceElement : 
stackTraceElements) {
 
 Review comment:
   but doesn't that just print the exception stack trace, whereas this method 
prints a full thread dump?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9900) Failed to testRestoreBehaviourWithFaultyStateHandles (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579425#comment-16579425
 ] 

ASF GitHub Bot commented on FLINK-9900:
---

zentol commented on a change in pull request #6468: [FLINK-9900][tests] Include 
more information on timeout in Zookeeper HA ITCase
URL: https://github.com/apache/flink/pull/6468#discussion_r209865380
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/ZooKeeperHighAvailabilityITCase.java
 ##
 @@ -256,13 +262,54 @@ public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IO
() -> clusterClient.getJobStatus(jobID),
Time.milliseconds(50),
deadline,
-   (jobStatus) -> jobStatus == JobStatus.FINISHED,
+   JobStatus::isGloballyTerminalState,
TestingUtils.defaultScheduledExecutor());
-   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   try {
+   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   } catch (Throwable e) {
+   // include additional debugging information
+   StringWriter error = new StringWriter();
+   try (PrintWriter out = new PrintWriter(error)) {
+   out.println("The job did not finish in time.");
+   
out.println("allowedInitializeCallsWithoutRestore= " + 
CheckpointBlockingFunction.allowedInitializeCallsWithoutRestore.get());
+   out.println("illegalRestores= " + 
CheckpointBlockingFunction.illegalRestores.get());
+   out.println("successfulRestores= " + 
CheckpointBlockingFunction.successfulRestores.get());
+   out.println("afterMessWithZooKeeper= " + 
CheckpointBlockingFunction.afterMessWithZooKeeper.get());
+   out.println("failedAlready= " + 
CheckpointBlockingFunction.failedAlready.get());
+   out.println("currentJobStatus= " + 
clusterClient.getJobStatus(jobID).get());
+   out.println("numRestarts= " + 
RestartReporter.numRestarts.getValue());
+   out.println("threadDump= " + 
generateThreadDump());
+   }
+   throw new AssertionError(error.toString(), 
ExceptionUtils.stripCompletionException(e));
+   }
 
assertThat("We saw illegal restores.", 
CheckpointBlockingFunction.illegalRestores.get(), is(0));
}
 
+   private static String generateThreadDump() {
+   final StringBuilder dump = new StringBuilder();
+   final ThreadMXBean threadMXBean = 
ManagementFactory.getThreadMXBean();
+   final ThreadInfo[] threadInfos = 
threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
+   for (ThreadInfo threadInfo : threadInfos) {
+   dump.append('"');
+   dump.append(threadInfo.getThreadName());
+   dump.append('"');
+   final Thread.State state = threadInfo.getThreadState();
+   dump.append(System.lineSeparator());
+   dump.append("   java.lang.Thread.State: ");
+   dump.append(state);
+   final StackTraceElement[] stackTraceElements = 
threadInfo.getStackTrace();
+   for (final StackTraceElement stackTraceElement : 
stackTraceElements) {
 
 Review comment:
   but doesn't that just print the exception stack trace, whereas this method 
prints a full thread dump?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Failed to testRestoreBehaviourWithFaultyStateHandles 
> (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase) 
> ---
>
> Key: FLINK-9900
> URL: https://issues.apache.org/jira/browse/FLINK-9900
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.1, 1.6.0
>Reporter: zhangminglei
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.5.3, 1.6.1, 1.7.0
>
>
> https://api.travis-ci.org/v3/job/405843617/log.txt
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 

[GitHub] zentol commented on issue #6538: [hotfix][streaming] Fix and simplify PrintSinkFunctionTest

2018-08-14 Thread GitBox
zentol commented on issue #6538: [hotfix][streaming] Fix and simplify 
PrintSinkFunctionTest
URL: https://github.com/apache/flink/pull/6538#issuecomment-412793747
 
 
   Restarted the build again.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zentol commented on issue #6503: [FLINK-10072] [docs] Syntax and consistency issues in "The Broadcast State Pattern"

2018-08-14 Thread GitBox
zentol commented on issue #6503: [FLINK-10072] [docs] Syntax and consistency 
issues in "The Broadcast State Pattern"
URL: https://github.com/apache/flink/pull/6503#issuecomment-412794286
 
 
   @yanghua Please don't push empty commits to re-trigger travis or suggest 
this to others. Since this is a pure documentation change you'd just be wasting 
resources.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10072) Syntax and consistency issues in "The Broadcast State Pattern"

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579428#comment-16579428
 ] 

ASF GitHub Bot commented on FLINK-10072:


zentol commented on issue #6503: [FLINK-10072] [docs] Syntax and consistency 
issues in "The Broadcast State Pattern"
URL: https://github.com/apache/flink/pull/6503#issuecomment-412794286
 
 
   @yanghua Please don't push empty commits to re-trigger travis or suggest 
this to others. Since this is a pure documentation change you'd just be wasting 
resources.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Syntax and consistency issues in "The Broadcast State Pattern"
> --
>
> Key: FLINK-10072
> URL: https://issues.apache.org/jira/browse/FLINK-10072
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Streaming
>Affects Versions: 1.5.2
>Reporter: Rick Hofstede
>Priority: Trivial
>  Labels: pull-request-available
>
> There are several issues in the documentation for "[The Broadcast State 
> Pattern|https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/broadcast_state.html]":
>  # Indentation mixes up whitespace and tabs, causing the markdown layout to 
> be crippled (especially related to indentation),
>  # Broken (nested) list layout, causing multi-item lists to be rendered as 
> single-item lists, and
>  # inconsistent list layout.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] StefanRRichter commented on a change in pull request #6468: [FLINK-9900][tests] Include more information on timeout in Zookeeper HA ITCase

2018-08-14 Thread GitBox
StefanRRichter commented on a change in pull request #6468: [FLINK-9900][tests] 
Include more information on timeout in Zookeeper HA ITCase
URL: https://github.com/apache/flink/pull/6468#discussion_r209866827
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/ZooKeeperHighAvailabilityITCase.java
 ##
 @@ -256,13 +262,54 @@ public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IO
() -> clusterClient.getJobStatus(jobID),
Time.milliseconds(50),
deadline,
-   (jobStatus) -> jobStatus == JobStatus.FINISHED,
+   JobStatus::isGloballyTerminalState,
TestingUtils.defaultScheduledExecutor());
-   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   try {
+   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   } catch (Throwable e) {
+   // include additional debugging information
+   StringWriter error = new StringWriter();
+   try (PrintWriter out = new PrintWriter(error)) {
+   out.println("The job did not finish in time.");
+   
out.println("allowedInitializeCallsWithoutRestore= " + 
CheckpointBlockingFunction.allowedInitializeCallsWithoutRestore.get());
+   out.println("illegalRestores= " + 
CheckpointBlockingFunction.illegalRestores.get());
+   out.println("successfulRestores= " + 
CheckpointBlockingFunction.successfulRestores.get());
+   out.println("afterMessWithZooKeeper= " + 
CheckpointBlockingFunction.afterMessWithZooKeeper.get());
+   out.println("failedAlready= " + 
CheckpointBlockingFunction.failedAlready.get());
+   out.println("currentJobStatus= " + 
clusterClient.getJobStatus(jobID).get());
+   out.println("numRestarts= " + 
RestartReporter.numRestarts.getValue());
+   out.println("threadDump= " + 
generateThreadDump());
+   }
+   throw new AssertionError(error.toString(), 
ExceptionUtils.stripCompletionException(e));
+   }
 
assertThat("We saw illegal restores.", 
CheckpointBlockingFunction.illegalRestores.get(), is(0));
}
 
+   private static String generateThreadDump() {
+   final StringBuilder dump = new StringBuilder();
+   final ThreadMXBean threadMXBean = 
ManagementFactory.getThreadMXBean();
+   final ThreadInfo[] threadInfos = 
threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
+   for (ThreadInfo threadInfo : threadInfos) {
+   dump.append('"');
+   dump.append(threadInfo.getThreadName());
+   dump.append('"');
+   final Thread.State state = threadInfo.getThreadState();
+   dump.append(System.lineSeparator());
+   dump.append("   java.lang.Thread.State: ");
+   dump.append(state);
+   final StackTraceElement[] stackTraceElements = 
threadInfo.getStackTrace();
+   for (final StackTraceElement stackTraceElement : 
stackTraceElements) {
 
 Review comment:
   Sure, but you could use it in the loop that iterates all thread info objects 
to create the trace string per thread, which you can set via the 
`Throwable.setStackTrace` before printing?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
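
To make the suggestion above concrete, here is a minimal sketch of that
variant, using only JDK classes (this illustrates the review idea, not the
code under review): each thread's stack is attached to a throwaway Throwable
so that Throwable's own formatting produces the per-thread trace.

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public final class ThreadDumpSketch {

    public static String generateThreadDump() {
        final StringBuilder dump = new StringBuilder();
        final ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        final ThreadInfo[] threadInfos =
            threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
        for (ThreadInfo threadInfo : threadInfos) {
            if (threadInfo == null) {
                // thread terminated between getAllThreadIds() and getThreadInfo()
                continue;
            }
            dump.append('"').append(threadInfo.getThreadName()).append('"');
            dump.append(System.lineSeparator());
            dump.append("   java.lang.Thread.State: ");
            dump.append(threadInfo.getThreadState());
            dump.append(System.lineSeparator());
            // Reuse Throwable's formatting by substituting the thread's trace.
            final Throwable formatter = new Throwable();
            formatter.setStackTrace(threadInfo.getStackTrace());
            final StringWriter trace = new StringWriter();
            formatter.printStackTrace(new PrintWriter(trace, true));
            dump.append(trace);
        }
        return dump.toString();
    }
}
```

As the later comments in this thread point out, printStackTrace adds an extra
`java.lang.Throwable` header line per thread; that trade-off is exactly what
is discussed further below.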


[jira] [Commented] (FLINK-9900) Failed to testRestoreBehaviourWithFaultyStateHandles (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579432#comment-16579432
 ] 

ASF GitHub Bot commented on FLINK-9900:
---

StefanRRichter commented on a change in pull request #6468: [FLINK-9900][tests] 
Include more information on timeout in Zookeeper HA ITCase
URL: https://github.com/apache/flink/pull/6468#discussion_r209866827
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/ZooKeeperHighAvailabilityITCase.java
 ##
 @@ -256,13 +262,54 @@ public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IO
() -> clusterClient.getJobStatus(jobID),
Time.milliseconds(50),
deadline,
-   (jobStatus) -> jobStatus == JobStatus.FINISHED,
+   JobStatus::isGloballyTerminalState,
TestingUtils.defaultScheduledExecutor());
-   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   try {
+   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   } catch (Throwable e) {
+   // include additional debugging information
+   StringWriter error = new StringWriter();
+   try (PrintWriter out = new PrintWriter(error)) {
+   out.println("The job did not finish in time.");
+   
out.println("allowedInitializeCallsWithoutRestore= " + 
CheckpointBlockingFunction.allowedInitializeCallsWithoutRestore.get());
+   out.println("illegalRestores= " + 
CheckpointBlockingFunction.illegalRestores.get());
+   out.println("successfulRestores= " + 
CheckpointBlockingFunction.successfulRestores.get());
+   out.println("afterMessWithZooKeeper= " + 
CheckpointBlockingFunction.afterMessWithZooKeeper.get());
+   out.println("failedAlready= " + 
CheckpointBlockingFunction.failedAlready.get());
+   out.println("currentJobStatus= " + 
clusterClient.getJobStatus(jobID).get());
+   out.println("numRestarts= " + 
RestartReporter.numRestarts.getValue());
+   out.println("threadDump= " + 
generateThreadDump());
+   }
+   throw new AssertionError(error.toString(), 
ExceptionUtils.stripCompletionException(e));
+   }
 
assertThat("We saw illegal restores.", 
CheckpointBlockingFunction.illegalRestores.get(), is(0));
}
 
+   private static String generateThreadDump() {
+   final StringBuilder dump = new StringBuilder();
+   final ThreadMXBean threadMXBean = 
ManagementFactory.getThreadMXBean();
+   final ThreadInfo[] threadInfos = 
threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
+   for (ThreadInfo threadInfo : threadInfos) {
+   dump.append('"');
+   dump.append(threadInfo.getThreadName());
+   dump.append('"');
+   final Thread.State state = threadInfo.getThreadState();
+   dump.append(System.lineSeparator());
+   dump.append("   java.lang.Thread.State: ");
+   dump.append(state);
+   final StackTraceElement[] stackTraceElements = 
threadInfo.getStackTrace();
+   for (final StackTraceElement stackTraceElement : 
stackTraceElements) {
 
 Review comment:
   Sure, but you could use it in the loop that iterates all thread info objects 
to create the trace string per thread, which you can set via the 
`Throwable.setStackTrace` before printing?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Failed to testRestoreBehaviourWithFaultyStateHandles 
> (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase) 
> ---
>
> Key: FLINK-9900
> URL: https://issues.apache.org/jira/browse/FLINK-9900
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.1, 1.6.0
>Reporter: zhangminglei
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.5.3, 1.6.1, 1.7.0
>
>
> https://api.trav

[GitHub] zentol commented on a change in pull request #6468: [FLINK-9900][tests] Include more information on timeout in Zookeeper HA ITCase

2018-08-14 Thread GitBox
zentol commented on a change in pull request #6468: [FLINK-9900][tests] Include 
more information on timeout in Zookeeper HA ITCase
URL: https://github.com/apache/flink/pull/6468#discussion_r209867134
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/ZooKeeperHighAvailabilityITCase.java
 ##
 @@ -256,13 +262,54 @@ public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IO
() -> clusterClient.getJobStatus(jobID),
Time.milliseconds(50),
deadline,
-   (jobStatus) -> jobStatus == JobStatus.FINISHED,
+   JobStatus::isGloballyTerminalState,
TestingUtils.defaultScheduledExecutor());
-   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   try {
+   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   } catch (Throwable e) {
+   // include additional debugging information
+   StringWriter error = new StringWriter();
+   try (PrintWriter out = new PrintWriter(error)) {
+   out.println("The job did not finish in time.");
+   
out.println("allowedInitializeCallsWithoutRestore= " + 
CheckpointBlockingFunction.allowedInitializeCallsWithoutRestore.get());
+   out.println("illegalRestores= " + 
CheckpointBlockingFunction.illegalRestores.get());
+   out.println("successfulRestores= " + 
CheckpointBlockingFunction.successfulRestores.get());
+   out.println("afterMessWithZooKeeper= " + 
CheckpointBlockingFunction.afterMessWithZooKeeper.get());
+   out.println("failedAlready= " + 
CheckpointBlockingFunction.failedAlready.get());
+   out.println("currentJobStatus= " + 
clusterClient.getJobStatus(jobID).get());
+   out.println("numRestarts= " + 
RestartReporter.numRestarts.getValue());
+   out.println("threadDump= " + 
generateThreadDump());
+   }
+   throw new AssertionError(error.toString(), 
ExceptionUtils.stripCompletionException(e));
+   }
 
assertThat("We saw illegal restores.", 
CheckpointBlockingFunction.illegalRestores.get(), is(0));
}
 
+   private static String generateThreadDump() {
+   final StringBuilder dump = new StringBuilder();
+   final ThreadMXBean threadMXBean = 
ManagementFactory.getThreadMXBean();
+   final ThreadInfo[] threadInfos = 
threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
+   for (ThreadInfo threadInfo : threadInfos) {
+   dump.append('"');
+   dump.append(threadInfo.getThreadName());
+   dump.append('"');
+   final Thread.State state = threadInfo.getThreadState();
+   dump.append(System.lineSeparator());
+   dump.append("   java.lang.Thread.State: ");
+   dump.append(state);
+   final StackTraceElement[] stackTraceElements = 
threadInfo.getStackTrace();
+   for (final StackTraceElement stackTraceElement : 
stackTraceElements) {
 
 Review comment:
   ah ok, that makes sense.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9900) Failed to testRestoreBehaviourWithFaultyStateHandles (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579434#comment-16579434
 ] 

ASF GitHub Bot commented on FLINK-9900:
---

zentol commented on a change in pull request #6468: [FLINK-9900][tests] Include 
more information on timeout in Zookeeper HA ITCase
URL: https://github.com/apache/flink/pull/6468#discussion_r209867134
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/ZooKeeperHighAvailabilityITCase.java
 ##
 @@ -256,13 +262,54 @@ public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IO
() -> clusterClient.getJobStatus(jobID),
Time.milliseconds(50),
deadline,
-   (jobStatus) -> jobStatus == JobStatus.FINISHED,
+   JobStatus::isGloballyTerminalState,
TestingUtils.defaultScheduledExecutor());
-   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   try {
+   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   } catch (Throwable e) {
+   // include additional debugging information
+   StringWriter error = new StringWriter();
+   try (PrintWriter out = new PrintWriter(error)) {
+   out.println("The job did not finish in time.");
+   
out.println("allowedInitializeCallsWithoutRestore= " + 
CheckpointBlockingFunction.allowedInitializeCallsWithoutRestore.get());
+   out.println("illegalRestores= " + 
CheckpointBlockingFunction.illegalRestores.get());
+   out.println("successfulRestores= " + 
CheckpointBlockingFunction.successfulRestores.get());
+   out.println("afterMessWithZooKeeper= " + 
CheckpointBlockingFunction.afterMessWithZooKeeper.get());
+   out.println("failedAlready= " + 
CheckpointBlockingFunction.failedAlready.get());
+   out.println("currentJobStatus= " + 
clusterClient.getJobStatus(jobID).get());
+   out.println("numRestarts= " + 
RestartReporter.numRestarts.getValue());
+   out.println("threadDump= " + 
generateThreadDump());
+   }
+   throw new AssertionError(error.toString(), 
ExceptionUtils.stripCompletionException(e));
+   }
 
assertThat("We saw illegal restores.", 
CheckpointBlockingFunction.illegalRestores.get(), is(0));
}
 
+   private static String generateThreadDump() {
+   final StringBuilder dump = new StringBuilder();
+   final ThreadMXBean threadMXBean = 
ManagementFactory.getThreadMXBean();
+   final ThreadInfo[] threadInfos = 
threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
+   for (ThreadInfo threadInfo : threadInfos) {
+   dump.append('"');
+   dump.append(threadInfo.getThreadName());
+   dump.append('"');
+   final Thread.State state = threadInfo.getThreadState();
+   dump.append(System.lineSeparator());
+   dump.append("   java.lang.Thread.State: ");
+   dump.append(state);
+   final StackTraceElement[] stackTraceElements = 
threadInfo.getStackTrace();
+   for (final StackTraceElement stackTraceElement : 
stackTraceElements) {
 
 Review comment:
   ah ok, that makes sense.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Failed to testRestoreBehaviourWithFaultyStateHandles 
> (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase) 
> ---
>
> Key: FLINK-9900
> URL: https://issues.apache.org/jira/browse/FLINK-9900
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.1, 1.6.0
>Reporter: zhangminglei
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.5.3, 1.6.1, 1.7.0
>
>
> https://api.travis-ci.org/v3/job/405843617/log.txt
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 124.598 sec 
> <<< FAILURE! - in 
> org.apache.flink.test.checkpointi

[jira] [Created] (FLINK-10140) Log files are not available via web UI in containerized environment

2018-08-14 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-10140:
-

 Summary: Log files are not available via web UI in containerized 
environment
 Key: FLINK-10140
 URL: https://issues.apache.org/jira/browse/FLINK-10140
 Project: Flink
  Issue Type: Bug
  Components: Docker, Kubernetes
Affects Versions: 1.6.0, 1.5.2, 1.7.0
Reporter: Till Rohrmann
 Fix For: 1.7.0


Since we start Flink components in the foreground (see 
{{flink-contrib/docker-flink/docker-entrypoint.sh}} and 
{{flink-container/docker/docker-entrypoint.sh}}), we print the log statements to 
STDOUT and don't write them into a file. Consequently, the web UI cannot serve 
the log files since they don't exist.

A simple way to fix the problem is to also create a log file, as in daemon mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on a change in pull request #6503: [FLINK-10072] [docs] Syntax and consistency issues in "The Broadcast State Pattern"

2018-08-14 Thread GitBox
zentol commented on a change in pull request #6503: [FLINK-10072] [docs] Syntax 
and consistency issues in "The Broadcast State Pattern"
URL: https://github.com/apache/flink/pull/6503#discussion_r209870859
 
 

 ##
 File path: docs/dev/stream/state/broadcast_state.md
 ##
 @@ -171,9 +171,8 @@ exposes some functionality which is not available to the 
`BroadcastProcessFuncti
   to register event and/or processing time timers. When a timer fires, the 
`onTimer()` (shown above) is invoked with an 
   `OnTimerContext` which exposes the same functionality as the 
`ReadOnlyContext` plus 
- the ability to ask if the timer that fired was an event or processing 
time one and 
-   - to query the key associated with the timer.
-
-  This is aligned with the `onTimer()` method of the `KeyedProcessFunction`. 
+   - to query the key associated with the timer.
 
 Review comment:
   This part still doesn't look correct.
   
![untitled](https://user-images.githubusercontent.com/5725237/44080974-dce6f1e0-9fad-11e8-8ea1-ae98c5254b19.png)
   
   It's now hard to tell whether `This is aligned with the onTimer() method of 
the KeyedProcessFunction.` applies to `the ReadOnlyContext in the 
processElement() method ...` or either/both of the sub-items.
   
   We should just re-phrase this part. Actually I would just remove it; as 
someone not deeply familiar with process functions I simply assumed that 
`KeyedBroadcastProcessFunction#onTimer` works exactly like 
`KeyedProcessFunction#onTimer`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10072) Syntax and consistency issues in "The Broadcast State Pattern"

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579447#comment-16579447
 ] 

ASF GitHub Bot commented on FLINK-10072:


zentol commented on a change in pull request #6503: [FLINK-10072] [docs] Syntax 
and consistency issues in "The Broadcast State Pattern"
URL: https://github.com/apache/flink/pull/6503#discussion_r209870859
 
 

 ##
 File path: docs/dev/stream/state/broadcast_state.md
 ##
 @@ -171,9 +171,8 @@ exposes some functionality which is not available to the 
`BroadcastProcessFuncti
   to register event and/or processing time timers. When a timer fires, the 
`onTimer()` (shown above) is invoked with an 
   `OnTimerContext` which exposes the same functionality as the 
`ReadOnlyContext` plus 
- the ability to ask if the timer that fired was an event or processing 
time one and 
-   - to query the key associated with the timer.
-
-  This is aligned with the `onTimer()` method of the `KeyedProcessFunction`. 
+   - to query the key associated with the timer.
 
 Review comment:
   This part still doesn't look correct.
   
![untitled](https://user-images.githubusercontent.com/5725237/44080974-dce6f1e0-9fad-11e8-8ea1-ae98c5254b19.png)
   
   It's now hard to tell whether `This is aligned with the onTimer() method of 
the KeyedProcessFunction.` applies to `the ReadOnlyContext in the 
processElement() method ...` or either/both of the sub-items.
   
   We should just re-phrase this part. Actually I would just remove it; as 
someone not deeply familiar with process functions I simply assumed that 
`KeyedBroadcastProcessFunction#onTimer` works exactly like 
`KeyedProcessFunction#onTimer`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Syntax and consistency issues in "The Broadcast State Pattern"
> --
>
> Key: FLINK-10072
> URL: https://issues.apache.org/jira/browse/FLINK-10072
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Streaming
>Affects Versions: 1.5.2
>Reporter: Rick Hofstede
>Priority: Trivial
>  Labels: pull-request-available
>
> There are several issues in the documentation for "[The Broadcast State 
> Pattern|https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/broadcast_state.html]":
>  # Indentation mixes up whitespace and tabs, causing the markdown layout to 
> be crippled (especially related to indentation),
>  # Broken (nested) list layout, causing multi-item lists to be rendered as 
> single-item lists, and
>  # inconsistent list layout.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-3875) Add a TableSink for Elasticsearch

2018-08-14 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-3875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-3875:
---

Assignee: Timo Walther  (was: Suneel Marthi)

> Add a TableSink for Elasticsearch
> -
>
> Key: FLINK-3875
> URL: https://issues.apache.org/jira/browse/FLINK-3875
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors, Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Major
>
> Add a TableSink that writes data to Elasticsearch



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on a change in pull request #6468: [FLINK-9900][tests] Include more information on timeout in Zookeeper HA ITCase

2018-08-14 Thread GitBox
zentol commented on a change in pull request #6468: [FLINK-9900][tests] Include 
more information on timeout in Zookeeper HA ITCase
URL: https://github.com/apache/flink/pull/6468#discussion_r209879523
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/ZooKeeperHighAvailabilityITCase.java
 ##
 @@ -256,13 +262,54 @@ public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IO
() -> clusterClient.getJobStatus(jobID),
Time.milliseconds(50),
deadline,
-   (jobStatus) -> jobStatus == JobStatus.FINISHED,
+   JobStatus::isGloballyTerminalState,
TestingUtils.defaultScheduledExecutor());
-   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   try {
+   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   } catch (Throwable e) {
+   // include additional debugging information
+   StringWriter error = new StringWriter();
+   try (PrintWriter out = new PrintWriter(error)) {
+   out.println("The job did not finish in time.");
+   
out.println("allowedInitializeCallsWithoutRestore= " + 
CheckpointBlockingFunction.allowedInitializeCallsWithoutRestore.get());
+   out.println("illegalRestores= " + 
CheckpointBlockingFunction.illegalRestores.get());
+   out.println("successfulRestores= " + 
CheckpointBlockingFunction.successfulRestores.get());
+   out.println("afterMessWithZooKeeper= " + 
CheckpointBlockingFunction.afterMessWithZooKeeper.get());
+   out.println("failedAlready= " + 
CheckpointBlockingFunction.failedAlready.get());
+   out.println("currentJobStatus= " + 
clusterClient.getJobStatus(jobID).get());
+   out.println("numRestarts= " + 
RestartReporter.numRestarts.getValue());
+   out.println("threadDump= " + 
generateThreadDump());
+   }
+   throw new AssertionError(error.toString(), 
ExceptionUtils.stripCompletionException(e));
+   }
 
assertThat("We saw illegal restores.", 
CheckpointBlockingFunction.illegalRestores.get(), is(0));
}
 
+   private static String generateThreadDump() {
+   final StringBuilder dump = new StringBuilder();
+   final ThreadMXBean threadMXBean = 
ManagementFactory.getThreadMXBean();
+   final ThreadInfo[] threadInfos = 
threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
+   for (ThreadInfo threadInfo : threadInfos) {
+   dump.append('"');
+   dump.append(threadInfo.getThreadName());
+   dump.append('"');
+   final Thread.State state = threadInfo.getThreadState();
+   dump.append(System.lineSeparator());
+   dump.append("   java.lang.Thread.State: ");
+   dump.append(state);
+   final StackTraceElement[] stackTraceElements = 
threadInfo.getStackTrace();
+   for (final StackTraceElement stackTraceElement : 
stackTraceElements) {
 
 Review comment:
   The output is nicer if we go with the original option:
   without Throwable (i.e. current PR revision):
   ```
   "Monitor Ctrl-Break"
  java.lang.Thread.State: RUNNABLE
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
   at java.net.SocketInputStream.read(SocketInputStream.java:170)
   at java.net.SocketInputStream.read(SocketInputStream.java:141)
   at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
   at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
   at java.io.InputStreamReader.read(InputStreamReader.java:184)
   at java.io.BufferedReader.fill(BufferedReader.java:161)
   at java.io.BufferedReader.readLine(BufferedReader.java:324)
   at java.io.BufferedReader.readLine(BufferedReader.java:389)
   at 
com.intellij.rt.execution.application.AppMainV2$1.run(AppMainV2.java:64)
   ```
   with throwable:
   ```
   "Monitor Ctrl-Break"
  java.lang.Thread.State: RUNNABLE
   java.lang.Throwable
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.ni

[jira] [Commented] (FLINK-9900) Failed to testRestoreBehaviourWithFaultyStateHandles (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579487#comment-16579487
 ] 

ASF GitHub Bot commented on FLINK-9900:
---

zentol commented on a change in pull request #6468: [FLINK-9900][tests] Include 
more information on timeout in Zookeeper HA ITCase
URL: https://github.com/apache/flink/pull/6468#discussion_r209879523
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/ZooKeeperHighAvailabilityITCase.java
 ##
 @@ -256,13 +262,54 @@ public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IO
() -> clusterClient.getJobStatus(jobID),
Time.milliseconds(50),
deadline,
-   (jobStatus) -> jobStatus == JobStatus.FINISHED,
+   JobStatus::isGloballyTerminalState,
TestingUtils.defaultScheduledExecutor());
-   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   try {
+   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   } catch (Throwable e) {
+   // include additional debugging information
+   StringWriter error = new StringWriter();
+   try (PrintWriter out = new PrintWriter(error)) {
+   out.println("The job did not finish in time.");
+   
out.println("allowedInitializeCallsWithoutRestore= " + 
CheckpointBlockingFunction.allowedInitializeCallsWithoutRestore.get());
+   out.println("illegalRestores= " + 
CheckpointBlockingFunction.illegalRestores.get());
+   out.println("successfulRestores= " + 
CheckpointBlockingFunction.successfulRestores.get());
+   out.println("afterMessWithZooKeeper= " + 
CheckpointBlockingFunction.afterMessWithZooKeeper.get());
+   out.println("failedAlready= " + 
CheckpointBlockingFunction.failedAlready.get());
+   out.println("currentJobStatus= " + 
clusterClient.getJobStatus(jobID).get());
+   out.println("numRestarts= " + 
RestartReporter.numRestarts.getValue());
+   out.println("threadDump= " + 
generateThreadDump());
+   }
+   throw new AssertionError(error.toString(), 
ExceptionUtils.stripCompletionException(e));
+   }
 
assertThat("We saw illegal restores.", 
CheckpointBlockingFunction.illegalRestores.get(), is(0));
}
 
+   private static String generateThreadDump() {
+   final StringBuilder dump = new StringBuilder();
+   final ThreadMXBean threadMXBean = 
ManagementFactory.getThreadMXBean();
+   final ThreadInfo[] threadInfos = 
threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
+   for (ThreadInfo threadInfo : threadInfos) {
+   dump.append('"');
+   dump.append(threadInfo.getThreadName());
+   dump.append('"');
+   final Thread.State state = threadInfo.getThreadState();
+   dump.append(System.lineSeparator());
+   dump.append("   java.lang.Thread.State: ");
+   dump.append(state);
+   final StackTraceElement[] stackTraceElements = 
threadInfo.getStackTrace();
+   for (final StackTraceElement stackTraceElement : 
stackTraceElements) {
 
 Review comment:
   The output is nicer if we go with the original option:
   without Throwable (i.e. current PR revision):
   ```
   "Monitor Ctrl-Break"
  java.lang.Thread.State: RUNNABLE
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
   at java.net.SocketInputStream.read(SocketInputStream.java:170)
   at java.net.SocketInputStream.read(SocketInputStream.java:141)
   at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
   at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
   at java.io.InputStreamReader.read(InputStreamReader.java:184)
   at java.io.BufferedReader.fill(BufferedReader.java:161)
   at java.io.BufferedReader.readLine(BufferedReader.java:324)
   at java.io.BufferedReader.readLine(BufferedReader.java:389)
   at 
com.intellij.rt.execution.application.AppMainV2$1.run(AppMainV2.java:64)
   ```
   with throwable:
   ```
   "Monitor Ctrl-Break"
  java.lang.Thread.State: RUNNABLE
   java.lang.Throwable
at java.net.SocketInputStream.socket

[jira] [Commented] (FLINK-10119) JsonRowDeserializationSchema deserialize kafka message

2018-08-14 Thread sean.miao (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579489#comment-16579489
 ] 

sean.miao commented on FLINK-10119:
---

I mean that if there is a record whose format is not JSON, 
JsonRowDeserializationSchema#deserialize will throw an IOException and the job 
will fail. A JSON object with a missing field, by contrast, will be assigned a 
null value for that field by calling setFailOnMissingField(true).

> JsonRowDeserializationSchema deserialize kafka message
> --
>
> Key: FLINK-10119
> URL: https://issues.apache.org/jira/browse/FLINK-10119
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.5.1
> Environment: none
>Reporter: sean.miao
>Assignee: buptljy
>Priority: Major
>
> Recently, we have been using Kafka010JsonTableSource to process Kafka's JSON 
> messages. We turned on checkpointing and an auto-restart strategy.
> We found that a single message that is not valid JSON prevents the job from 
> being brought back up. Of course, this behavior guarantees exactly-once or 
> at-least-once processing, but the resulting application is unavailable, which 
> has a major impact on us.
> the code is:
> class: JsonRowDeserializationSchema
> function:
> @Override
> public Row deserialize(byte[] message) throws IOException {
>     try {
>         final JsonNode root = objectMapper.readTree(message);
>         return convertRow(root, (RowTypeInfo) typeInfo);
>     } catch (Throwable t) {
>         throw new IOException("Failed to deserialize JSON object.", t);
>     }
> }
> Now I change it to:
> public Row deserialize(byte[] message) throws IOException {
>     try {
>         JsonNode root = this.objectMapper.readTree(message);
>         return this.convertRow(root, (RowTypeInfo) this.typeInfo);
>     } catch (Throwable t) {
>         message = this.objectMapper.writeValueAsBytes("{}");
>         JsonNode root = this.objectMapper.readTree(message);
>         return this.convertRow(root, (RowTypeInfo) this.typeInfo);
>     }
> }
>  
> I think that data format errors are inevitable during network transmission, 
> so could we add a new column to the table for records with the wrong data 
> format, like Spark SQL does?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
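
As an alternative to patching deserialize() to return an empty row, as
proposed in the issue above, the corrupt record can be skipped at the schema
boundary. Below is a minimal sketch, assuming Flink's generic
DeserializationSchema interface; the wrapper class name is illustrative, and
it relies on the Kafka consumer's documented behavior of skipping records for
which deserialize() returns null.

```java
import java.io.IOException;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;

/**
 * Illustrative wrapper: delegates to an inner schema and returns null for
 * records that fail to deserialize, so the Flink Kafka consumer drops them
 * instead of failing the job.
 */
public class SkipCorruptRecordsSchema<T> implements DeserializationSchema<T> {

    private final DeserializationSchema<T> inner;

    public SkipCorruptRecordsSchema(DeserializationSchema<T> inner) {
        this.inner = inner;
    }

    @Override
    public T deserialize(byte[] message) throws IOException {
        try {
            return inner.deserialize(message);
        } catch (IOException e) {
            // e.g. the record is not valid JSON; drop it rather than fail
            return null;
        }
    }

    @Override
    public boolean isEndOfStream(T nextElement) {
        return inner.isEndOfStream(nextElement);
    }

    @Override
    public TypeInformation<T> getProducedType() {
        return inner.getProducedType();
    }
}
```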


[GitHub] twalthr commented on a change in pull request #6359: [FLINK-6895][table]Add STR_TO_DATE supported in SQL

2018-08-14 Thread GitBox
twalthr commented on a change in pull request #6359: [FLINK-6895][table]Add 
STR_TO_DATE supported in SQL
URL: https://github.com/apache/flink/pull/6359#discussion_r209881790
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/StrToDateCallGen.scala
 ##
 @@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.codegen.calls
+
+import org.apache.flink.api.common.typeinfo.SqlTimeTypeInfo
+import 
org.apache.flink.table.codegen.calls.CallGenerator.generateCallIfArgsNotNull
+import org.apache.flink.table.codegen.{CodeGenerator, GeneratedExpression}
+import org.apache.flink.table.runtime.functions.DateTimeFunctions
+
+/**
+  * Generates StrToDate call based on the type of last operand.
+  */
+class StrToDateCallGen extends CallGenerator {
+  override def generate(codeGenerator: CodeGenerator,
+operands: Seq[GeneratedExpression]): 
GeneratedExpression = {
+if (operands.last.literal) {
 
 Review comment:
   In the CodeGenerator line 744 you can see that we calculate the result type 
from Calcite:
   ```
   val resultType = FlinkTypeFactory.toTypeInfo(call.getType)
   ```
   
   We cannot simply return our own type for functions, we always need to stay 
in sync with Calcite. For following operators such as:
   
   ```
   1. DataStreamCalc(field: DATE) # if we would return TIMESTAMP here, the 
following operation would still assume DATE, thus we get a type mismatch.
   2. DataStreamCalc((field + 1): DATE)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StefanRRichter commented on a change in pull request #6468: [FLINK-9900][tests] Include more information on timeout in Zookeeper HA ITCase

2018-08-14 Thread GitBox
StefanRRichter commented on a change in pull request #6468: [FLINK-9900][tests] 
Include more information on timeout in Zookeeper HA ITCase
URL: https://github.com/apache/flink/pull/6468#discussion_r209881971
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/ZooKeeperHighAvailabilityITCase.java
 ##
 @@ -256,13 +262,54 @@ public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IO
() -> clusterClient.getJobStatus(jobID),
Time.milliseconds(50),
deadline,
-   (jobStatus) -> jobStatus == JobStatus.FINISHED,
+   JobStatus::isGloballyTerminalState,
TestingUtils.defaultScheduledExecutor());
-   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   try {
+   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   } catch (Throwable e) {
+   // include additional debugging information
+   StringWriter error = new StringWriter();
+   try (PrintWriter out = new PrintWriter(error)) {
+   out.println("The job did not finish in time.");
+   
out.println("allowedInitializeCallsWithoutRestore= " + 
CheckpointBlockingFunction.allowedInitializeCallsWithoutRestore.get());
+   out.println("illegalRestores= " + 
CheckpointBlockingFunction.illegalRestores.get());
+   out.println("successfulRestores= " + 
CheckpointBlockingFunction.successfulRestores.get());
+   out.println("afterMessWithZooKeeper= " + 
CheckpointBlockingFunction.afterMessWithZooKeeper.get());
+   out.println("failedAlready= " + 
CheckpointBlockingFunction.failedAlready.get());
+   out.println("currentJobStatus= " + 
clusterClient.getJobStatus(jobID).get());
+   out.println("numRestarts= " + 
RestartReporter.numRestarts.getValue());
+   out.println("threadDump= " + 
generateThreadDump());
+   }
+   throw new AssertionError(error.toString(), 
ExceptionUtils.stripCompletionException(e));
+   }
 
assertThat("We saw illegal restores.", 
CheckpointBlockingFunction.illegalRestores.get(), is(0));
}
 
+   private static String generateThreadDump() {
+   final StringBuilder dump = new StringBuilder();
+   final ThreadMXBean threadMXBean = 
ManagementFactory.getThreadMXBean();
+   final ThreadInfo[] threadInfos = 
threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
+   for (ThreadInfo threadInfo : threadInfos) {
+   dump.append('"');
+   dump.append(threadInfo.getThreadName());
+   dump.append('"');
+   final Thread.State state = threadInfo.getThreadState();
+   dump.append(System.lineSeparator());
+   dump.append("   java.lang.Thread.State: ");
+   dump.append(state);
+   final StackTraceElement[] stackTraceElements = 
threadInfo.getStackTrace();
+   for (final StackTraceElement stackTraceElement : 
stackTraceElements) {
 
 Review comment:
   Fine with me, especially when we consider that the code is probably not 
permanently there.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-6895) Add STR_TO_DATE supported in SQL

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579491#comment-16579491
 ] 

ASF GitHub Bot commented on FLINK-6895:
---

twalthr commented on a change in pull request #6359: [FLINK-6895][table]Add 
STR_TO_DATE supported in SQL
URL: https://github.com/apache/flink/pull/6359#discussion_r209881790
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/StrToDateCallGen.scala
 ##
 @@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.codegen.calls
+
+import org.apache.flink.api.common.typeinfo.SqlTimeTypeInfo
+import 
org.apache.flink.table.codegen.calls.CallGenerator.generateCallIfArgsNotNull
+import org.apache.flink.table.codegen.{CodeGenerator, GeneratedExpression}
+import org.apache.flink.table.runtime.functions.DateTimeFunctions
+
+/**
+  * Generates StrToDate call based on the type of last operand.
+  */
+class StrToDateCallGen extends CallGenerator {
+  override def generate(codeGenerator: CodeGenerator,
+operands: Seq[GeneratedExpression]): 
GeneratedExpression = {
+if (operands.last.literal) {
 
 Review comment:
   In the CodeGenerator line 744 you can see that we calculate the result type 
from Calcite:
   ```
   val resultType = FlinkTypeFactory.toTypeInfo(call.getType)
   ```
   
   We cannot simply return our own type for functions, we always need to stay 
in sync with Calcite. For following operators such as:
   
   ```
   1. DataStreamCalc(field: DATE) # if we would return TIMESTAMP here, the 
following operation would still assume DATE, thus we get a type mismatch.
   2. DataStreamCalc((field + 1): DATE)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add STR_TO_DATE supported in SQL
> 
>
> Key: FLINK-6895
> URL: https://issues.apache.org/jira/browse/FLINK-6895
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>  Labels: pull-request-available
>
> STR_TO_DATE(str,format) This is the inverse of the DATE_FORMAT() function. It 
> takes a string str and a format string format. STR_TO_DATE() returns a 
> DATETIME value if the format string contains both date and time parts, or a 
> DATE or TIME value if the string contains only date or time parts. If the 
> date, time, or datetime value extracted from str is illegal, STR_TO_DATE() 
> returns NULL and produces a warning.
> * Syntax:
> STR_TO_DATE(str,format) 
> * Arguments
> **str: -
> **format: -
> * Return Types
>   DATETIME/DATE/TIME
> * Example:
>   STR_TO_DATE('01,5,2013','%d,%m,%Y') -> '2013-05-01'
>   SELECT STR_TO_DATE('a09:30:17','a%h:%i:%s') -> '09:30:17'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_str-to-date]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
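
For illustration only: assuming the STR_TO_DATE function proposed above has
been merged, it could be called from the Java Table API as sketched below.
The table name `events` and its STRING column `ts_str` are assumptions; the
function is not available in released Flink versions at the time of this
thread.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class StrToDateExample {

    public static void main(String[] args) {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv =
            TableEnvironment.getTableEnvironment(env);

        // 'events' is assumed to be registered with a STRING column 'ts_str',
        // holding values like '01,5,2013'.
        Table parsed = tableEnv.sqlQuery(
            "SELECT STR_TO_DATE(ts_str, '%d,%m,%Y') AS event_date FROM events");
        // STR_TO_DATE('01,5,2013', '%d,%m,%Y') -> DATE '2013-05-01'
    }
}
```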


[jira] [Commented] (FLINK-9900) Failed to testRestoreBehaviourWithFaultyStateHandles (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579495#comment-16579495
 ] 

ASF GitHub Bot commented on FLINK-9900:
---

StefanRRichter commented on a change in pull request #6468: [FLINK-9900][tests] 
Include more information on timeout in Zookeeper HA ITCase
URL: https://github.com/apache/flink/pull/6468#discussion_r209881971
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/ZooKeeperHighAvailabilityITCase.java
 ##
 @@ -256,13 +262,54 @@ public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IO
() -> clusterClient.getJobStatus(jobID),
Time.milliseconds(50),
deadline,
-   (jobStatus) -> jobStatus == JobStatus.FINISHED,
+   JobStatus::isGloballyTerminalState,
TestingUtils.defaultScheduledExecutor());
-   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   try {
+   assertEquals(JobStatus.FINISHED, jobStatusFuture.get());
+   } catch (Throwable e) {
+   // include additional debugging information
+   StringWriter error = new StringWriter();
+   try (PrintWriter out = new PrintWriter(error)) {
+   out.println("The job did not finish in time.");
+   
out.println("allowedInitializeCallsWithoutRestore= " + 
CheckpointBlockingFunction.allowedInitializeCallsWithoutRestore.get());
+   out.println("illegalRestores= " + 
CheckpointBlockingFunction.illegalRestores.get());
+   out.println("successfulRestores= " + 
CheckpointBlockingFunction.successfulRestores.get());
+   out.println("afterMessWithZooKeeper= " + 
CheckpointBlockingFunction.afterMessWithZooKeeper.get());
+   out.println("failedAlready= " + 
CheckpointBlockingFunction.failedAlready.get());
+   out.println("currentJobStatus= " + 
clusterClient.getJobStatus(jobID).get());
+   out.println("numRestarts= " + 
RestartReporter.numRestarts.getValue());
+   out.println("threadDump= " + 
generateThreadDump());
+   }
+   throw new AssertionError(error.toString(), 
ExceptionUtils.stripCompletionException(e));
+   }
 
assertThat("We saw illegal restores.", 
CheckpointBlockingFunction.illegalRestores.get(), is(0));
}
 
+   private static String generateThreadDump() {
+   final StringBuilder dump = new StringBuilder();
+   final ThreadMXBean threadMXBean = 
ManagementFactory.getThreadMXBean();
+   final ThreadInfo[] threadInfos = 
threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
+   for (ThreadInfo threadInfo : threadInfos) {
+   dump.append('"');
+   dump.append(threadInfo.getThreadName());
+   dump.append('"');
+   final Thread.State state = threadInfo.getThreadState();
+   dump.append(System.lineSeparator());
+   dump.append("   java.lang.Thread.State: ");
+   dump.append(state);
+   final StackTraceElement[] stackTraceElements = 
threadInfo.getStackTrace();
+   for (final StackTraceElement stackTraceElement : 
stackTraceElements) {
 
 Review comment:
   Fine with me, especially considering that the code is probably not there 
permanently.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Failed to testRestoreBehaviourWithFaultyStateHandles 
> (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase) 
> ---
>
> Key: FLINK-9900
> URL: https://issues.apache.org/jira/browse/FLINK-9900
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.1, 1.6.0
>Reporter: zhangminglei
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.5.3, 1.6.1, 1.7.0
>
>
> https://api.travis-ci.org/v3/job/405843617/log.txt
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time ela

[GitHub] yanghua commented on issue #6266: [FLINK-9682] Add setDescription to execution environment and provide description field for the rest api

2018-08-14 Thread GitBox
yanghua commented on issue #6266: [FLINK-9682] Add setDescription to execution 
environment and provide description field for the rest api
URL: https://github.com/apache/flink/pull/6266#issuecomment-412811803
 
 
   @zentol and @tillrohrmann Hope this PR can be reviewed as soon as possible 
so that we can start the second part.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9682) Add setDescription to execution environment and provide description field for the rest api

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579519#comment-16579519
 ] 

ASF GitHub Bot commented on FLINK-9682:
---

yanghua commented on issue #6266: [FLINK-9682] Add setDescription to execution 
environment and provide description field for the rest api
URL: https://github.com/apache/flink/pull/6266#issuecomment-412811803
 
 
   @zentol and @tillrohrmann Hope this PR can be reviewed as soon as possible 
so that we can start the second part.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add setDescription to execution environment and provide description field for 
> the rest api
> --
>
> Key: FLINK-9682
> URL: https://issues.apache.org/jira/browse/FLINK-9682
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API, Webfrontend
>Affects Versions: 1.5.0
>Reporter: Elias Levy
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> Currently you can provide a job name to {{execute}} in the execution 
> environment.  In an environment where many versions of a job may be executing, 
> such as a development or test environment, identifying which running job is 
> of a specific version via the UI can be difficult unless the version is 
> embedded into the job name given to {{execute}}.  But the job name is used 
> for other purposes, such as for namespacing metrics.  Thus, it is not ideal 
> to modify the job name, as that could require modifying metric dashboards and 
> monitors each time versions change.
> I propose a new method be added to the execution environment, 
> {{setDescription}}, that would allow a user to pass in an arbitrary 
> description that would be displayed in the dashboard, allowing users to 
> distinguish jobs.
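>
> A minimal sketch of the proposal ({{setDescription}} is the proposed method and
> does not exist in Flink yet; the strings are illustrative):
> {code}
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>
> StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>
> // Proposed: free-form text shown in the dashboard, independent of the job name.
> env.setDescription("fraud-detection, build 2018-08-14, staging");
>
> env.fromElements(1, 2, 3).print();
>
> // The job name stays stable, so metric namespaces and dashboards are unaffected.
> env.execute("fraud-detection");
> {code}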



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] aljoscha commented on issue #6536: Fix Javadoc links in documentation

2018-08-14 Thread GitBox
aljoscha commented on issue #6536: Fix Javadoc links in documentation
URL: https://github.com/apache/flink/pull/6536#issuecomment-412811997
 
 
   @zentol I already expanded the release guide and I will add 
`javadocs_baseurl` once I merge this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tillrohrmann commented on issue #6475: [FLINK-10012] Add setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API

2018-08-14 Thread GitBox
tillrohrmann commented on issue #6475: [FLINK-10012] Add 
setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API
URL: https://github.com/apache/flink/pull/6475#issuecomment-412812237
 
 
   Thanks for improving Flink's Python API, @yanghua. As far as I can tell, 
the changes look good to me. I would additionally like to hear @zentol's opinion 
on whether the test is sufficient.
   
   There is no discussion/proposal yet on how to proceed with Flink's Python 
support. There are, however, ideas about how it could be tackled. One idea is to 
offer Python support through Beam, because the Python functionality in Beam is 
quite advanced and would allow people to run all kinds of Python libraries in 
their UDFs. Nothing is set in stone yet, but please keep in mind that Flink's 
own Python API might be affected by this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10012) Add setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579520#comment-16579520
 ] 

ASF GitHub Bot commented on FLINK-10012:


tillrohrmann commented on issue #6475: [FLINK-10012] Add 
setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API
URL: https://github.com/apache/flink/pull/6475#issuecomment-412812237
 
 
   Thanks for improving Flink's Python API, @yanghua. As far as I can tell, 
the changes look good to me. I would additionally like to hear @zentol's opinion 
on whether the test is sufficient.
   
   There is no discussion/proposal yet on how to proceed with Flink's Python 
support. There are, however, ideas about how it could be tackled. One idea is to 
offer Python support through Beam, because the Python functionality in Beam is 
quite advanced and would allow people to run all kinds of Python libraries in 
their UDFs. Nothing is set in stone yet, but please keep in mind that Flink's 
own Python API might be affected by this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API
> --
>
> Key: FLINK-10012
> URL: https://issues.apache.org/jira/browse/FLINK-10012
> Project: Flink
>  Issue Type: Sub-task
>  Components: Python API
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] aljoscha closed pull request #6536: Fix Javadoc links in documentation

2018-08-14 Thread GitBox
aljoscha closed pull request #6536: Fix Javadoc links in documentation
URL: https://github.com/apache/flink/pull/6536
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/_config.yml b/docs/_config.yml
index 48f39579241..79c4070eb38 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -31,8 +31,6 @@ version: "1.7-SNAPSHOT"
 # For stable releases, leave the bugfix version out (e.g. 1.2). For snapshot
 # release this should be the same as the regular version
 version_title: "1.7-SNAPSHOT"
-version_javadocs: "1.7-SNAPSHOT"
-version_scaladocs: "1.7-SNAPSHOT"
 
 # This suffix is appended to the Scala-dependent Maven artifact names
 scala_version_suffix: "_2.11"
@@ -48,6 +46,8 @@ download_url: "http://flink.apache.org/downloads.html";
 baseurl: //ci.apache.org/projects/flink/flink-docs-master
 stable_baseurl: //ci.apache.org/projects/flink/flink-docs-stable
 
+javadocs_baseurl: //ci.apache.org/projects/flink/flink-docs-master
+
 # Flag whether this is a stable version or not. Used for the quickstart page.
 is_stable: false
 
diff --git a/docs/_includes/sidenav.html b/docs/_includes/sidenav.html
index 5e50c350d6b..5c5fb51d126 100644
--- a/docs/_includes/sidenav.html
+++ b/docs/_includes/sidenav.html
@@ -126,8 +126,8 @@
   {% endif %}
 {% endfor %}
   
-  https://ci.apache.org/projects/flink/flink-docs-release-{{site.version_javadocs}}/api/java";> Javadocs
-  https://ci.apache.org/projects/flink/flink-docs-release-{{site.version_scaladocs}}/api/scala/index.html#org.apache.flink.api.scala.package";> Scaladocs
+   Javadocs
+   Scaladocs
   http://flink.apache.org";> Project Page
 
 
diff --git a/tools/releasing/create_release_branch.sh 
b/tools/releasing/create_release_branch.sh
index 7e16483ec6c..615722b9c78 100755
--- a/tools/releasing/create_release_branch.sh
+++ b/tools/releasing/create_release_branch.sh
@@ -66,8 +66,6 @@ perl -pi -e "s#^version: .*#version: \"${NEW_VERSION}\"#" 
_config.yml
 # The version in the title should not contain the bugfix version (e.g. 1.3)
 VERSION_TITLE=$(echo $NEW_VERSION | sed 's/\.[^.]*$//')
 perl -pi -e "s#^version_title: .*#version_title: ${VERSION_TITLE}#" _config.yml
-perl -pi -e "s#^version_javadocs: .*#version_javadocs: ${VERSION_TITLE}#" 
_config.yml
-perl -pi -e "s#^version_scaladocs: .*#version_scaladocs: ${VERSION_TITLE}#" 
_config.yml
 cd ..
 
 git commit -am "Commit for release $NEW_VERSION"
diff --git a/tools/releasing/update_branch_version.sh 
b/tools/releasing/update_branch_version.sh
index 951ea7151c8..90fd7820114 100755
--- a/tools/releasing/update_branch_version.sh
+++ b/tools/releasing/update_branch_version.sh
@@ -55,8 +55,6 @@ find . -name 'pom.xml' -type f -exec perl -pi -e 
's#'$OLD_VERSION'

[jira] [Commented] (FLINK-10138) Queryable state (rocksdb) end-to-end test failed on Travis

2018-08-14 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579535#comment-16579535
 ] 

Till Rohrmann commented on FLINK-10138:
---

I guess that it is then not affected by FLINK-8993. Will update the description.

> Queryable state (rocksdb) end-to-end test failed on Travis
> --
>
> Key: FLINK-10138
> URL: https://issues.apache.org/jira/browse/FLINK-10138
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.7.0
>
>
> The {{Queryable state (rocksdb) end-to-end test}} failed on Travis with the 
> following exception
> {code}
> Exception in thread "main" java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: Failed request 54.
>  Caused by: java.lang.RuntimeException: Failed request 54.
>  Caused by: java.lang.RuntimeException: Error while processing request with 
> ID 54. Caused by: org.apache.flink.util.FlinkRuntimeException: Error while 
> deserializing the user value.
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getValue(RocksDBMapState.java:446)
>   at 
> org.apache.flink.queryablestate.client.state.serialization.KvStateSerializer.serializeMap(KvStateSerializer.java:222)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.getSerializedValue(RocksDBMapState.java:296)
>   at 
> org.apache.flink.queryablestate.server.KvStateServerHandler.getSerializedValue(KvStateServerHandler.java:107)
>   at 
> org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:84)
>   at 
> org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
>   at 
> org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.EOFException: No more bytes left.
>   at 
> org.apache.flink.api.java.typeutils.runtime.NoFetchingInput.require(NoFetchingInput.java:79)
>   at com.esotericsoftware.kryo.io.Input.readString(Input.java:452)
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:132)
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
>   at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:641)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:752)
>   at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:315)
>   at 
> org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:411)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.deserializeUserValue(RocksDBMapState.java:347)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.access$100(RocksDBMapState.java:66)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getValue(RocksDBMapState.java:444)
>   ... 11 more
>   at 
> org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:95)
>   at 
> org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
>   at 
> org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   at 
> org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.lambda$run$0(AbstractServerHandler.java:273)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:778)
>   at 
> java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2140)
>   at 
> org.apache.flink.queryablestate.network.AbstractServerHand

[jira] [Updated] (FLINK-10138) Queryable state (rocksdb) end-to-end test failed on Travis

2018-08-14 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-10138:
--
Description: 
The {{Queryable state (rocksdb) end-to-end test}} failed on Travis with the 
following exception
{code}
Exception in thread "main" java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: Failed request 54.
 Caused by: java.lang.RuntimeException: Failed request 54.
 Caused by: java.lang.RuntimeException: Error while processing request with ID 
54. Caused by: org.apache.flink.util.FlinkRuntimeException: Error while 
deserializing the user value.
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getValue(RocksDBMapState.java:446)
at 
org.apache.flink.queryablestate.client.state.serialization.KvStateSerializer.serializeMap(KvStateSerializer.java:222)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.getSerializedValue(RocksDBMapState.java:296)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.getSerializedValue(KvStateServerHandler.java:107)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:84)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.EOFException: No more bytes left.
at 
org.apache.flink.api.java.typeutils.runtime.NoFetchingInput.require(NoFetchingInput.java:79)
at com.esotericsoftware.kryo.io.Input.readString(Input.java:452)
at 
com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:132)
at 
com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:641)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:752)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:315)
at 
org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:411)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.deserializeUserValue(RocksDBMapState.java:347)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.access$100(RocksDBMapState.java:66)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getValue(RocksDBMapState.java:444)
... 11 more

at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:95)
at 
org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.lambda$run$0(AbstractServerHandler.java:273)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:778)
at 
java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2140)
at 
org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at 
org.apache.flink.streaming.tests.queryablestate.QsStateC

[jira] [Commented] (FLINK-10109) Add documentation for StreamingFileSink

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579544#comment-16579544
 ] 

ASF GitHub Bot commented on FLINK-10109:


aljoscha closed pull request #6532: [FLINK-10109] Add documentation for 
StreamingFileSink
URL: https://github.com/apache/flink/pull/6532
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/dev/connectors/streamfile_sink.md 
b/docs/dev/connectors/streamfile_sink.md
new file mode 100644
index 000..3db5577f67c
--- /dev/null
+++ b/docs/dev/connectors/streamfile_sink.md
@@ -0,0 +1,123 @@
+---
+title: "Streaming File Sink"
+nav-title: Streaming File Sink
+nav-parent_id: connectors
+nav-pos: 5
+---
+
+
+This connector provides a Sink that writes partitioned files to filesystems
+supported by the Flink `FileSystem` abstraction. Since in streaming the input
+is potentially infinite, the streaming file sink writes data into buckets. The
+bucketing behaviour is configurable but a useful default is time-based
+bucketing where we start writing a new bucket every hour and thus get
+individual files that each contain a part of the infinite output stream.
+
+Within a bucket, we further split the output into smaller part files based on a
+rolling policy. This is useful to prevent individual bucket files from getting
+too big. This is also configurable but the default policy rolls files based on
+file size and a timeout, i.e. if no new data was written to a part file. 
+
+The `StreamingFileSink` supports both row-wise encoding formats and
+bulk-encoding formats, such as [Apache Parquet](http://parquet.apache.org).
+
+ Using Row-encoded Output Formats
+
+The only required configurations are the base path where we want to output our
+data and an
+[Encoder]({{ site.baseurl 
}}/api/java/org/apache/flink/api/common/serialization/Encoder.html)
+that is used for serializing records to the `OutputStream` for each file.
+
+Basic usage thus looks like this:
+
+
+
+
+{% highlight java %}
+import org.apache.flink.api.common.serialization.Encoder;
+import org.apache.flink.core.fs.Path;
+import 
org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
+
+DataStream<String> input = ...;
+
+final StreamingFileSink<String> sink = StreamingFileSink
+   .forRowFormat(new Path(outputPath), (Encoder<String>) (element, stream) -> {
+   PrintStream out = new PrintStream(stream);
+   out.println(element);
+   })
+   .build();
+
+input.addSink(sink);
+
+{% endhighlight %}
+
+
+{% highlight scala %}
+import org.apache.flink.api.common.serialization.Encoder
+import org.apache.flink.core.fs.Path
+import 
org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
+
+val input: DataStream[String] = ...
+
+val sink: StreamingFileSink[String] = StreamingFileSink
+   .forRowFormat(new Path(outputPath), (element, stream) => {
+   val out = new PrintStream(stream)
+   out.println(element)
+   })
+   .build()
+
+input.addSink(sink)
+
+{% endhighlight %}
+
+
+
+This will create a streaming sink that creates hourly buckets and uses a
+default rolling policy. The default bucket assigner is
+[DateTimeBucketAssigner]({{ site.baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/bucketassigners/DateTimeBucketAssigner.html)
+and the default rolling policy is
+[DefaultRollingPolicy]({{ site.baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/rollingpolicies/DefaultRollingPolicy.html).
+You can specify a custom
+[BucketAssigner]({{ site.baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketAssigner.html)
+and
+[RollingPolicy]({{ site.baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/RollingPolicy.html)
+on the sink builder. Please check out the JavaDoc for
+[StreamingFileSink]({{ site.baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/StreamingFileSink.html)
+for more configuration options and more documentation about the workings and
+interactions of bucket assigners and rolling policies.
+
+ Using Bulk-encoded Output Formats
+
+In the above example we used an `Encoder` that can encode or serialize each
+record individually. The streaming file sink also supports bulk-encoded output
+formats such as [Apache Parquet](http://parquet.apache.org). To use these,
+instead of `StreamingFileSink.forRowFormat()` you would use
+`StreamingFileSink.forBulkFormat()` and specify a `BulkWriter.Factory`.
+
+[ParquetAvroWriters]({{ site.baseurl 
}}/api/java/org/apache/flink/formats/parquet/avro/ParquetAvroWriters.html)
+has static methods for creating a `

[jira] [Commented] (FLINK-9853) add hex support in table api and sql

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579543#comment-16579543
 ] 

ASF GitHub Bot commented on FLINK-9853:
---

asfgit closed pull request #6337: [FLINK-9853] [table] Add HEX support 
URL: https://github.com/apache/flink/pull/6337
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/dev/table/sql.md b/docs/dev/table/sql.md
index 148f30727d9..09b9b261b68 100644
--- a/docs/dev/table/sql.md
+++ b/docs/dev/table/sql.md
@@ -1675,6 +1675,17 @@ BIN(numeric)
   
 
 
+
+  
+{% highlight text %}
+HEX(numeric)
+HEX(string)
+  {% endhighlight %}
+  
+  
+Returns a string representation of an integer numeric value or a 
string in hex format. Returns null if numeric is null. E.g. For numeric "20" 
leads to "14", "100" leads to "64", and for string "hello,world" leads to 
"68656c6c6f2c776f726c64".
+  
+
   
 
 
diff --git a/docs/dev/table/tableApi.md b/docs/dev/table/tableApi.md
index b702dddcde5..b5dd4164a7d 100644
--- a/docs/dev/table/tableApi.md
+++ b/docs/dev/table/tableApi.md
@@ -2333,6 +2333,17 @@ NUMERIC.bin()
 

 
+
+ 
+   {% highlight java %}
+NUMERIC.hex()
+STRING.hex()
+{% endhighlight %}
+ 
+
+  Returns a string representation of an integer numeric value or a 
string in hex format. Returns null if numeric is null. E.g. For numeric "20" 
leads to "14", "100" leads to "64", and for string "hello,world" leads to 
"68656c6c6f2c776f726c64".
+
+   
   
 
 
@@ -3921,6 +3932,17 @@ NUMERIC.bin()
 

 
+
+ 
+   {% highlight scala %}
+NUMERIC.hex()
+STRING.hex()
+{% endhighlight %}
+ 
+
+  Returns a string representation of an integer numeric value or a 
string in hex format. Returns null if numeric is null. E.g. For numeric "20" 
leads to "14", "100" leads to "64", and for string "hello,world" leads to 
"68656c6c6f2c776f726c64".
+
+   
   
 
 
diff --git 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
index c7c805f6743..1df30d98d11 100644
--- 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
+++ 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
@@ -406,6 +406,13 @@ trait ImplicitExpressionOperations {
 */
   def bin() = Bin(expr)
 
+  /**
+* Returns a string representation of an integer numeric value or a string 
in hex format.
+* Returns null if numeric or string is null. E.g. For numeric "20" leads 
to "14",
+* "100" leads to "64", and for string "hello,world" leads to 
"68656c6c6f2c776f726c64".
+*/
+  def hex() = Hex(expr)
+
   // String operations
 
   /**
diff --git 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala
 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala
index f5ed9b387de..1e21bfe7830 100644
--- 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala
+++ 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala
@@ -132,6 +132,11 @@ object BuiltInMethods {
 
   val BIN = Types.lookupMethod(classOf[JLong], "toBinaryString", classOf[Long])
 
+  val HEX = Types.lookupMethod(classOf[ScalarFunctions], "hex", classOf[Long])
+
+  val HEX_STRING =
+Types.lookupMethod(classOf[ScalarFunctions], "hexString", classOf[String])
+
   val FROMBASE64 = Types.lookupMethod(classOf[ScalarFunctions], "fromBase64", 
classOf[String])
 
   val TOBASE64 = Types.lookupMethod(classOf[ScalarFunctions], "toBase64", 
classOf[String])
diff --git 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/FunctionGenerator.scala
 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/FunctionGenerator.scala
index 74b69d6afcc..47f54fb229c 100644
--- 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/FunctionGenerator.scala
+++ 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/FunctionGenerator.scala
@@ -455,6 +455,18 @@ object FunctionGenerator {
 STRING_TYPE_INFO,
 BuiltInMethods.BIN)
 
+  addSqlFunctionMethod(
+ScalarSqlFunctions.HEX,
+Seq(LONG_TYPE_INFO),
+STRING_TYPE_INFO,
+BuiltInMethods.HEX)
+
+  addSqlFunctionMethod(
+ScalarSqlFunctions.HEX,
+Seq(STRING_TYPE_INFO),
+STRING_TYPE_INFO,
+BuiltInMethods.HEX_STRING)
+
 

[jira] [Commented] (FLINK-10109) Add documentation for StreamingFileSink

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579546#comment-16579546
 ] 

ASF GitHub Bot commented on FLINK-10109:


aljoscha commented on issue #6532: [FLINK-10109] Add documentation for 
StreamingFileSink
URL: https://github.com/apache/flink/pull/6532#issuecomment-412815785
 
 
   Merged, thanks for the reviews!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add documentation for StreamingFileSink
> ---
>
> Key: FLINK-10109
> URL: https://issues.apache.org/jira/browse/FLINK-10109
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Affects Versions: 1.6.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] aljoscha closed pull request #6532: [FLINK-10109] Add documentation for StreamingFileSink

2018-08-14 Thread GitBox
aljoscha closed pull request #6532: [FLINK-10109] Add documentation for 
StreamingFileSink
URL: https://github.com/apache/flink/pull/6532
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/dev/connectors/streamfile_sink.md 
b/docs/dev/connectors/streamfile_sink.md
new file mode 100644
index 000..3db5577f67c
--- /dev/null
+++ b/docs/dev/connectors/streamfile_sink.md
@@ -0,0 +1,123 @@
+---
+title: "Streaming File Sink"
+nav-title: Streaming File Sink
+nav-parent_id: connectors
+nav-pos: 5
+---
+
+
+This connector provides a Sink that writes partitioned files to filesystems
+supported by the Flink `FileSystem` abstraction. Since in streaming the input
+is potentially infinite, the streaming file sink writes data into buckets. The
+bucketing behaviour is configurable but a useful default is time-based
+bucketing where we start writing a new bucket every hour and thus get
+individual files that each contain a part of the infinite output stream.
+
+Within a bucket, we further split the output into smaller part files based on a
+rolling policy. This is useful to prevent individual bucket files from getting
+too big. This is also configurable but the default policy rolls files based on
+file size and a timeout, i.e. if no new data was written to a part file. 
+
+The `StreamingFileSink` supports both row-wise encoding formats and
+bulk-encoding formats, such as [Apache Parquet](http://parquet.apache.org).
+
+ Using Row-encoded Output Formats
+
+The only required configurations are the base path where we want to output our
+data and an
+[Encoder]({{ site.baseurl 
}}/api/java/org/apache/flink/api/common/serialization/Encoder.html)
+that is used for serializing records to the `OutputStream` for each file.
+
+Basic usage thus looks like this:
+
+
+
+
+{% highlight java %}
+import org.apache.flink.api.common.serialization.Encoder;
+import org.apache.flink.core.fs.Path;
+import 
org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
+
+DataStream<String> input = ...;
+
+final StreamingFileSink<String> sink = StreamingFileSink
+   .forRowFormat(new Path(outputPath), (Encoder<String>) (element, stream) -> {
+   PrintStream out = new PrintStream(stream);
+   out.println(element);
+   })
+   .build();
+
+input.addSink(sink);
+
+{% endhighlight %}
+
+
+{% highlight scala %}
+import org.apache.flink.api.common.serialization.Encoder
+import org.apache.flink.core.fs.Path
+import 
org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
+
+val input: DataStream[String] = ...
+
+val sink: StreamingFileSink[String] = StreamingFileSink
+   .forRowFormat(new Path(outputPath), (element, stream) => {
+   val out = new PrintStream(stream)
+   out.println(element)
+   })
+   .build()
+
+input.addSink(sink)
+
+{% endhighlight %}
+
+
+
+This will create a streaming sink that creates hourly buckets and uses a
+default rolling policy. The default bucket assigner is
+[DateTimeBucketAssigner]({{ site.baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/bucketassigners/DateTimeBucketAssigner.html)
+and the default rolling policy is
+[DefaultRollingPolicy]({{ site.baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/rollingpolicies/DefaultRollingPolicy.html).
+You can specify a custom
+[BucketAssigner]({{ site.baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketAssigner.html)
+and
+[RollingPolicy]({{ site.baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/RollingPolicy.html)
+on the sink builder. Please check out the JavaDoc for
+[StreamingFileSink]({{ site.baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/StreamingFileSink.html)
+for more configuration options and more documentation about the workings and
+interactions of bucket assigners and rolling policies.
+
+ Using Bulk-encoded Output Formats
+
+In the above example we used an `Encoder` that can encode or serialize each
+record individually. The streaming file sink also supports bulk-encoded output
+formats such as [Apache Parquet](http://parquet.apache.org). To use these,
+instead of `StreamingFileSink.forRowFormat()` you would use
+`StreamingFileSink.forBulkFormat()` and specify a `BulkWriter.Factory`.
+
+[ParquetAvroWriters]({{ site.baseurl 
}}/api/java/org/apache/flink/formats/parquet/avro/ParquetAvroWriters.html)
+has static methods for creating a `BulkWriter.Factory` for various types.
+
+
+IMPORTANT: Bulk-encoding formats can only be combined with the
+`OnCheckpointRollingPolicy`, which rolls the in-progress part file on
+every checkpoint.
+
+
+{% top %}
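
For reference, a minimal bulk-format usage sketch to go with the page above. The
`MyEvent` POJO and `outputPath` are illustrative assumptions;
`StreamingFileSink.forBulkFormat` and `ParquetAvroWriters.forReflectRecord` are
the methods the page refers to:

```java
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

DataStream<MyEvent> input = ...;

// Bulk-encoded output: a BulkWriter.Factory instead of a per-record Encoder.
// In-progress Parquet part files are rolled on every checkpoint.
StreamingFileSink<MyEvent> sink = StreamingFileSink
	.forBulkFormat(new Path(outputPath), ParquetAvroWriters.forReflectRecord(MyEvent.class))
	.build();

input.addSink(sink);
```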


 

---

[GitHub] asfgit closed pull request #6337: [FLINK-9853] [table] Add HEX support

2018-08-14 Thread GitBox
asfgit closed pull request #6337: [FLINK-9853] [table] Add HEX support 
URL: https://github.com/apache/flink/pull/6337
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/dev/table/sql.md b/docs/dev/table/sql.md
index 148f30727d9..09b9b261b68 100644
--- a/docs/dev/table/sql.md
+++ b/docs/dev/table/sql.md
@@ -1675,6 +1675,17 @@ BIN(numeric)
   
 
 
+
+  
+{% highlight text %}
+HEX(numeric)
+HEX(string)
+  {% endhighlight %}
+  
+  
+Returns a string representation of an integer numeric value or a 
string in hex format. Returns null if numeric is null. E.g. For numeric "20" 
leads to "14", "100" leads to "64", and for string "hello,world" leads to 
"68656c6c6f2c776f726c64".
+  
+
   
 
 
diff --git a/docs/dev/table/tableApi.md b/docs/dev/table/tableApi.md
index b702dddcde5..b5dd4164a7d 100644
--- a/docs/dev/table/tableApi.md
+++ b/docs/dev/table/tableApi.md
@@ -2333,6 +2333,17 @@ NUMERIC.bin()
 

 
+
+ 
+   {% highlight java %}
+NUMERIC.hex()
+STRING.hex()
+{% endhighlight %}
+ 
+
+  Returns a string representation of an integer numeric value or a 
string in hex format. Returns null if numeric is null. E.g. For numeric "20" 
leads to "14", "100" leads to "64", and for string "hello,world" leads to 
"68656c6c6f2c776f726c64".
+
+   
   
 
 
@@ -3921,6 +3932,17 @@ NUMERIC.bin()
 

 
+
+ 
+   {% highlight scala %}
+NUMERIC.hex()
+STRING.hex()
+{% endhighlight %}
+ 
+
+  Returns a string representation of an integer numeric value or a 
string in hex format. Returns null if numeric is null. E.g. For numeric "20" 
leads to "14", "100" leads to "64", and for string "hello,world" leads to 
"68656c6c6f2c776f726c64".
+
+   
   
 
 
diff --git 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
index c7c805f6743..1df30d98d11 100644
--- 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
+++ 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
@@ -406,6 +406,13 @@ trait ImplicitExpressionOperations {
 */
   def bin() = Bin(expr)
 
+  /**
+* Returns a string representation of an integer numeric value or a string 
in hex format.
+* Returns null if numeric or string is null. E.g. For numeric "20" leads 
to "14",
+* "100" leads to "64", and for string "hello,world" leads to 
"68656c6c6f2c776f726c64".
+*/
+  def hex() = Hex(expr)
+
   // String operations
 
   /**
diff --git 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala
 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala
index f5ed9b387de..1e21bfe7830 100644
--- 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala
+++ 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala
@@ -132,6 +132,11 @@ object BuiltInMethods {
 
   val BIN = Types.lookupMethod(classOf[JLong], "toBinaryString", classOf[Long])
 
+  val HEX = Types.lookupMethod(classOf[ScalarFunctions], "hex", classOf[Long])
+
+  val HEX_STRING =
+Types.lookupMethod(classOf[ScalarFunctions], "hexString", classOf[String])
+
   val FROMBASE64 = Types.lookupMethod(classOf[ScalarFunctions], "fromBase64", 
classOf[String])
 
   val TOBASE64 = Types.lookupMethod(classOf[ScalarFunctions], "toBase64", 
classOf[String])
diff --git 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/FunctionGenerator.scala
 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/FunctionGenerator.scala
index 74b69d6afcc..47f54fb229c 100644
--- 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/FunctionGenerator.scala
+++ 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/calls/FunctionGenerator.scala
@@ -455,6 +455,18 @@ object FunctionGenerator {
 STRING_TYPE_INFO,
 BuiltInMethods.BIN)
 
+  addSqlFunctionMethod(
+ScalarSqlFunctions.HEX,
+Seq(LONG_TYPE_INFO),
+STRING_TYPE_INFO,
+BuiltInMethods.HEX)
+
+  addSqlFunctionMethod(
+ScalarSqlFunctions.HEX,
+Seq(STRING_TYPE_INFO),
+STRING_TYPE_INFO,
+BuiltInMethods.HEX_STRING)
+
   // 
--
   // Temporal functions
   // 
--
diff --git 
a/flink-libraries

[jira] [Resolved] (FLINK-9853) add hex support in table api and sql

2018-08-14 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-9853.
-
   Resolution: Fixed
 Assignee: xueyu
Fix Version/s: 1.7.0

Fixed in 1.7.0: ab1d1dfb2ad872748833896d552e2d56a26a9a92

> add hex support in table api and sql
> 
>
> Key: FLINK-9853
> URL: https://issues.apache.org/jira/browse/FLINK-9853
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: xueyu
>Assignee: xueyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Like in MySQL, HEX can take int or string arguments. For an integer argument 
> N, it returns a hexadecimal string representation of the value of N. For a 
> string argument str, it returns a hexadecimal string representation of str 
> where each byte of each character in str is converted to two hexadecimal 
> digits. 
> Syntax:
> HEX(100) = 64
> HEX('This is a test String.') = '546869732069732061207465737420537472696e672e'
> See more: [link 
> MySQL|https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_hex]
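>
> A short usage sketch from SQL, following the merged change; the table
> {{Orders}} and its columns are illustrative assumptions, and {{tEnv}} is a
> regular TableEnvironment set up as usual:
> {code}
> Table result = tEnv.sqlQuery("SELECT HEX(amount), HEX(note) FROM Orders");
> // e.g. amount = 100 -> '64', note = 'hello,world' -> '68656c6c6f2c776f726c64'
> {code}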



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] aljoscha commented on issue #6532: [FLINK-10109] Add documentation for StreamingFileSink

2018-08-14 Thread GitBox
aljoscha commented on issue #6532: [FLINK-10109] Add documentation for 
StreamingFileSink
URL: https://github.com/apache/flink/pull/6532#issuecomment-412815785
 
 
   Merged, thanks for the reviews!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-10109) Add documentation for StreamingFileSink

2018-08-14 Thread Aljoscha Krettek (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-10109.

   Resolution: Fixed
Fix Version/s: 1.7.0
   1.6.1

Added on release-1.6 in
b0382564e15fa5d6993b1e0f5331c7da09da6cd1

Added on master in
0dc351cb9678c55a1840711e7a2be582776ac5ef

> Add documentation for StreamingFileSink
> ---
>
> Key: FLINK-10109
> URL: https://issues.apache.org/jira/browse/FLINK-10109
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Affects Versions: 1.6.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] twalthr commented on issue #6337: [FLINK-9853] [table] Add HEX support

2018-08-14 Thread GitBox
twalthr commented on issue #6337: [FLINK-9853] [table] Add HEX support 
URL: https://github.com/apache/flink/pull/6337#issuecomment-412816367
 
 
   @xueyumusic for future PRs use `git rebase master YOURBRANCH` instead of 
merge commits. Thank you.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10109) Add documentation for StreamingFileSink

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579550#comment-16579550
 ] 

ASF GitHub Bot commented on FLINK-10109:


aljoscha commented on issue #6532: [FLINK-10109] Add documentation for 
StreamingFileSink
URL: https://github.com/apache/flink/pull/6532#issuecomment-412816175
 
 
   @zentol Ah, I saw that typo fix too late. Will fix now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add documentation for StreamingFileSink
> ---
>
> Key: FLINK-10109
> URL: https://issues.apache.org/jira/browse/FLINK-10109
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Affects Versions: 1.6.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] aljoscha commented on issue #6532: [FLINK-10109] Add documentation for StreamingFileSink

2018-08-14 Thread GitBox
aljoscha commented on issue #6532: [FLINK-10109] Add documentation for 
StreamingFileSink
URL: https://github.com/apache/flink/pull/6532#issuecomment-412816175
 
 
   @zentol Ah, I saw that typo fix too late. Will fix now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9853) add hex support in table api and sql

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579553#comment-16579553
 ] 

ASF GitHub Bot commented on FLINK-9853:
---

twalthr commented on issue #6337: [FLINK-9853] [table] Add HEX support 
URL: https://github.com/apache/flink/pull/6337#issuecomment-412816367
 
 
   @xueyumusic for future PRs use `git rebase master YOURBRANCH` instead of 
merge commits. Thank you.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> add hex support in table api and sql
> 
>
> Key: FLINK-9853
> URL: https://issues.apache.org/jira/browse/FLINK-9853
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: xueyu
>Assignee: xueyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Like in MySQL, HEX can take int or string arguments. For an integer argument 
> N, it returns a hexadecimal string representation of the value of N. For a 
> string argument str, it returns a hexadecimal string representation of str 
> where each byte of each character in str is converted to two hexadecimal 
> digits. 
> Syntax:
> HEX(100) = 64
> HEX('This is a test String.') = '546869732069732061207465737420537472696e672e'
> See more: [link 
> MySQL|https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_hex]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on a change in pull request #6475: [FLINK-10012] Add setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API

2018-08-14 Thread GitBox
zentol commented on a change in pull request #6475: [FLINK-10012] Add 
setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API
URL: https://github.com/apache/flink/pull/6475#discussion_r209891304
 
 

 ##
 File path: 
flink-libraries/flink-streaming-python/src/test/python/org/apache/flink/streaming/python/api/test_stream_execution_env.py
 ##
 @@ -0,0 +1,30 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+from org.apache.flink.streaming.api import  TimeCharacteristic
 
 Review comment:
   double space after `import`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zentol commented on a change in pull request #6475: [FLINK-10012] Add setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API

2018-08-14 Thread GitBox
zentol commented on a change in pull request #6475: [FLINK-10012] Add 
setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API
URL: https://github.com/apache/flink/pull/6475#discussion_r209890119
 
 

 ##
 File path: 
flink-libraries/flink-streaming-python/src/test/java/org/apache/flink/streaming/python/api/PythonStreamBinderTest.java
 ##
 @@ -63,7 +63,7 @@ private static Path findUtilsModule() {
 
@Test
public void testProgram() throws Exception {
-   Path testEntryPoint = new Path(getBaseTestPythonDir(), 
"examples/word_count.py");
+   Path testEntryPoint = new Path(getBaseTestPythonDir(), 
"run_all_tests.py");
 
 Review comment:
   nice catch, but please move this into a separate commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10012) Add setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579557#comment-16579557
 ] 

ASF GitHub Bot commented on FLINK-10012:


zentol commented on a change in pull request #6475: [FLINK-10012] Add 
setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API
URL: https://github.com/apache/flink/pull/6475#discussion_r209891304
 
 

 ##
 File path: 
flink-libraries/flink-streaming-python/src/test/python/org/apache/flink/streaming/python/api/test_stream_execution_env.py
 ##
 @@ -0,0 +1,30 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+from org.apache.flink.streaming.api import  TimeCharacteristic
 
 Review comment:
   double space after `import`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API
> --
>
> Key: FLINK-10012
> URL: https://issues.apache.org/jira/browse/FLINK-10012
> Project: Flink
>  Issue Type: Sub-task
>  Components: Python API
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10074) Allowable number of checkpoint failures

2018-08-14 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579555#comment-16579555
 ] 

Till Rohrmann commented on FLINK-10074:
---

Since {{setFailOnCheckpointingErrors}} is public, we cannot simply change its 
signature. What we could do, though, is add another method 
{{setNumberTolerableCheckpointFailures(int)}} which defaults to {{0}} 
and is only respected if {{setFailOnCheckpointingErrors}} is set to {{true}}. 
So if the user only calls {{setFailOnCheckpointingErrors(true)}}, then he will 
get the same old behaviour. Only after calling 
{{setNumberTolerableCheckpointFailures(10)}} will the job wait for 10 checkpoint 
failures before failing. If {{setNumberTolerableCheckpointFailures}} is set but 
{{setFailOnCheckpointingErrors(false)}} was called, then checkpoint failures 
won't fail the job.

[~thw] would you not reset the counter in case of a restart? This would be hard 
to do in case of a JobManager failover and lead to different behaviours 
depending on the actual fault.
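
A sketch of how the proposed combination could look from user code 
({{setNumberTolerableCheckpointFailures}} is the proposed method and does not 
exist yet; {{setFailOnCheckpointingErrors}} is existing API):
{code}
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(10_000L);

CheckpointConfig config = env.getCheckpointConfig();

// Existing switch: checkpoint errors fail the job.
config.setFailOnCheckpointingErrors(true);

// Proposed addition: tolerate up to 10 checkpoint failures before failing.
config.setNumberTolerableCheckpointFailures(10);
{code}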

> Allowable number of checkpoint failures 
> 
>
> Key: FLINK-10074
> URL: https://issues.apache.org/jira/browse/FLINK-10074
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Reporter: Thomas Weise
>Assignee: vinoyang
>Priority: Major
>
> For intermittent checkpoint failures it is desirable to have a mechanism to 
> avoid restarts. If, for example, a transient S3 error prevents checkpoint 
> completion, the next checkpoint may very well succeed. The user may wish to 
> not incur the expense of restart under such scenario and this could be 
> expressed with a failure threshold (number of subsequent checkpoint 
> failures), possibly combined with a list of exceptions to tolerate.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10012) Add setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579556#comment-16579556
 ] 

ASF GitHub Bot commented on FLINK-10012:


zentol commented on a change in pull request #6475: [FLINK-10012] Add 
setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API
URL: https://github.com/apache/flink/pull/6475#discussion_r209890119
 
 

 ##
 File path: 
flink-libraries/flink-streaming-python/src/test/java/org/apache/flink/streaming/python/api/PythonStreamBinderTest.java
 ##
 @@ -63,7 +63,7 @@ private static Path findUtilsModule() {
 
@Test
public void testProgram() throws Exception {
-   Path testEntryPoint = new Path(getBaseTestPythonDir(), 
"examples/word_count.py");
+   Path testEntryPoint = new Path(getBaseTestPythonDir(), 
"run_all_tests.py");
 
 Review comment:
   nice catch, but please move this into a separate commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API
> --
>
> Key: FLINK-10012
> URL: https://issues.apache.org/jira/browse/FLINK-10012
> Project: Flink
>  Issue Type: Sub-task
>  Components: Python API
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10119) JsonRowDeserializationSchema deserialize kafka message

2018-08-14 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579566#comment-16579566
 ] 

Timo Walther commented on FLINK-10119:
--

Yes, we can add more configuration possibilities.

> JsonRowDeserializationSchema deserialize kafka message
> --
>
> Key: FLINK-10119
> URL: https://issues.apache.org/jira/browse/FLINK-10119
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.5.1
> Environment: none
>Reporter: sean.miao
>Assignee: buptljy
>Priority: Major
>
> Recently, we have been using Kafka010JsonTableSource to process Kafka's JSON 
> messages. We turned on checkpointing and an auto-restart strategy.
> We found that a single message that is not valid JSON prevents the job from 
> being brought back up. Of course, this guarantees exactly-once or 
> at-least-once processing, but the resulting application is unavailable, 
> which has a big impact on us.
> The code is:
> Class: JsonRowDeserializationSchema
> Method:
> @Override
> public Row deserialize(byte[] message) throws IOException {
>     try {
>         final JsonNode root = objectMapper.readTree(message);
>         return convertRow(root, (RowTypeInfo) typeInfo);
>     } catch (Throwable t) {
>         throw new IOException("Failed to deserialize JSON object.", t);
>     }
> }
> Now I changed it to:
> public Row deserialize(byte[] message) throws IOException {
>     try {
>         JsonNode root = this.objectMapper.readTree(message);
>         return this.convertRow(root, (RowTypeInfo) this.typeInfo);
>     } catch (Throwable var4) {
>         message = this.objectMapper.writeValueAsBytes("{}");
>         JsonNode root = this.objectMapper.readTree(message);
>         return this.convertRow(root, (RowTypeInfo) this.typeInfo);
>     }
> }
> 
> I think that data format errors are inevitable during network transmission, 
> so can we add a new column to the table for malformed data, like Spark SQL 
> does?
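
For reference, one way to keep the job running without modifying Flink itself 
is a wrapper schema that returns {{null}} for corrupt records; the Flink Kafka 
consumer skips records for which the deserialization schema returns {{null}}. 
This is only an illustrative sketch, not the actual fix (class name made up 
for this example):

{code:java}
import java.io.IOException;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.types.Row;

public class TolerantDeserializationSchema implements DeserializationSchema<Row> {

    private final DeserializationSchema<Row> inner;

    public TolerantDeserializationSchema(DeserializationSchema<Row> inner) {
        this.inner = inner;
    }

    @Override
    public Row deserialize(byte[] message) throws IOException {
        try {
            return inner.deserialize(message);
        } catch (Exception e) {
            // Returning null makes the Kafka consumer skip the corrupt
            // record instead of failing the job.
            return null;
        }
    }

    @Override
    public boolean isEndOfStream(Row nextElement) {
        return false;
    }

    @Override
    public TypeInformation<Row> getProducedType() {
        return inner.getProducedType();
    }
}
{code}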



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] tillrohrmann commented on issue #6539: [FLINK-10123] Use ExecutorThreadFactory instead of DefaultThreadFactory in RestServer/Client

2018-08-14 Thread GitBox
tillrohrmann commented on issue #6539: [FLINK-10123] Use ExecutorThreadFactory 
instead of DefaultThreadFactory in RestServer/Client
URL: https://github.com/apache/flink/pull/6539#issuecomment-412817474
 
 
   Thanks @yanghua. Fixed the PR. Let's see what Travis says this time :-)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-10141) Reduce lock contention introduced with 1.5

2018-08-14 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-10141:
---

 Summary: Reduce lock contention introduced with 1.5
 Key: FLINK-10141
 URL: https://issues.apache.org/jira/browse/FLINK-10141
 Project: Flink
  Issue Type: Bug
  Components: Network
Affects Versions: 1.6.0, 1.5.2, 1.7.0
Reporter: Nico Kruber
Assignee: Nico Kruber


With the changes introducing credit-based flow control, as well as the 
low-latency changes, we unfortunately also introduced some lock contention on 
{{RemoteInputChannel#bufferQueue}} and {{RemoteInputChannel#receivedBuffers}}, 
as well as some unnecessary reads of atomic booleans like 
{{inputChannel.isReleased()}} in some scenarios.

This was observed as a high idle CPU load with no events in the stream but only 
watermarks (every 500ms) and many slots on a single machine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10123) Use ExecutorThreadFactory instead of DefaultThreadFactory in RestServer/Client

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579568#comment-16579568
 ] 

ASF GitHub Bot commented on FLINK-10123:


tillrohrmann commented on issue #6539: [FLINK-10123] Use ExecutorThreadFactory 
instead of DefaultThreadFactory in RestServer/Client
URL: https://github.com/apache/flink/pull/6539#issuecomment-412817474
 
 
   Thanks @yanghua. Fixed the PR. Let's see what Travis says this time :-)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Use ExecutorThreadFactory instead of DefaultThreadFactory in RestServer/Client
> --
>
> Key: FLINK-10123
> URL: https://issues.apache.org/jira/browse/FLINK-10123
> Project: Flink
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 1.6.0, 1.7.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Instead of using the {{DefaultThreadFactory}} in the 
> {{RestServerEndpoint}}/{{RestClient}} we should use the 
> {{ExecutorThreadFactory}} because it uses the {{FatalExitExceptionHandler}} 
> per default as the uncaught exception handler. This should guard against 
> uncaught exceptions by simply terminating the JVM.
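
For illustration, the pattern described above boils down to installing an 
uncaught-exception handler on every thread the factory creates. This is a 
simplified sketch, not Flink's actual {{ExecutorThreadFactory}} (class name 
and exit code are illustrative):

{code:java}
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class FatalExitThreadFactory implements ThreadFactory {

    private final String namePrefix;
    private final AtomicInteger counter = new AtomicInteger();

    public FatalExitThreadFactory(String namePrefix) {
        this.namePrefix = namePrefix;
    }

    @Override
    public Thread newThread(Runnable runnable) {
        Thread t = new Thread(runnable, namePrefix + '-' + counter.getAndIncrement());
        // Instead of silently swallowing uncaught exceptions, log them and
        // terminate the JVM so the failure cannot go unnoticed.
        t.setUncaughtExceptionHandler((thread, error) -> {
            error.printStackTrace();
            Runtime.getRuntime().halt(-1);
        });
        return t;
    }
}
{code}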



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-10119) JsonRowDeserializationSchema deserialize kafka message

2018-08-14 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579566#comment-16579566
 ] 

Timo Walther edited comment on FLINK-10119 at 8/14/18 9:51 AM:
---

Yes, we can add more configuration possibilities:
- One could be to ignore the entire record in this case.
- The other could be to add an `errors` field (with a configurable name) to 
the row 


was (Author: twalthr):
Yes, we can add more configuration possibilities.

> JsonRowDeserializationSchema deserialize kafka message
> --
>
> Key: FLINK-10119
> URL: https://issues.apache.org/jira/browse/FLINK-10119
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.5.1
> Environment: none
>Reporter: sean.miao
>Assignee: buptljy
>Priority: Major
>
> Recently, we have been using Kafka010JsonTableSource to process Kafka's JSON 
> messages. We turned on checkpointing and an auto-restart strategy.
> We found that a single message that is not valid JSON prevents the job from 
> being brought back up. Of course, this guarantees exactly-once or 
> at-least-once processing, but the resulting application is unavailable, 
> which has a big impact on us.
> The code is:
> Class: JsonRowDeserializationSchema
> Method:
> @Override
> public Row deserialize(byte[] message) throws IOException {
>     try {
>         final JsonNode root = objectMapper.readTree(message);
>         return convertRow(root, (RowTypeInfo) typeInfo);
>     } catch (Throwable t) {
>         throw new IOException("Failed to deserialize JSON object.", t);
>     }
> }
> Now I changed it to:
> public Row deserialize(byte[] message) throws IOException {
>     try {
>         JsonNode root = this.objectMapper.readTree(message);
>         return this.convertRow(root, (RowTypeInfo) this.typeInfo);
>     } catch (Throwable var4) {
>         message = this.objectMapper.writeValueAsBytes("{}");
>         JsonNode root = this.objectMapper.readTree(message);
>         return this.convertRow(root, (RowTypeInfo) this.typeInfo);
>     }
> }
> 
> I think that data format errors are inevitable during network transmission, 
> so can we add a new column to the table for malformed data, like Spark SQL 
> does?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-10119) JsonRowDeserializationSchema deserialize kafka message

2018-08-14 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579566#comment-16579566
 ] 

Timo Walther edited comment on FLINK-10119 at 8/14/18 9:57 AM:
---

Yes, we can add more configuration possibilities:
- One could be to ignore the entire record in this case (set all fields to null).
- The other could be to add an `errors` field (with a configurable name) to 
the row 


was (Author: twalthr):
Yes, we can add more configuration possibilities:
- One could be to ignore the entire record in this case.
- The other could be to add an `errors` field (with a configurable name) to 
the row 
that might be null or contain the JSON parsing exception message.
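
A rough sketch of the second option, as a modified {{deserialize()}} inside 
{{JsonRowDeserializationSchema}} (relying on the {{objectMapper}}, 
{{typeInfo}}, and {{convertRow}} members shown in the issue description below; 
the extra-field handling is illustrative only, not the actual change):

{code:java}
public Row deserialize(byte[] message) throws IOException {
    // one extra slot at the end of the row for the error message
    int arity = ((RowTypeInfo) typeInfo).getArity();
    Row result = new Row(arity + 1);
    try {
        JsonNode root = objectMapper.readTree(message);
        Row parsed = convertRow(root, (RowTypeInfo) typeInfo);
        for (int i = 0; i < arity; i++) {
            result.setField(i, parsed.getField(i));
        }
        result.setField(arity, null); // no error for this record
    } catch (Throwable t) {
        result.setField(arity, t.getMessage()); // expose the parsing error
    }
    return result;
}
{code}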

> JsonRowDeserializationSchema deserialize kafka message
> --
>
> Key: FLINK-10119
> URL: https://issues.apache.org/jira/browse/FLINK-10119
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.5.1
> Environment: none
>Reporter: sean.miao
>Assignee: buptljy
>Priority: Major
>
> Recently, we have been using Kafka010JsonTableSource to process Kafka's JSON 
> messages. We turned on checkpointing and an auto-restart strategy.
> We found that a single message that is not valid JSON prevents the job from 
> being brought back up. Of course, this guarantees exactly-once or 
> at-least-once processing, but the resulting application is unavailable, 
> which has a big impact on us.
> The code is:
> Class: JsonRowDeserializationSchema
> Method:
> @Override
> public Row deserialize(byte[] message) throws IOException {
>     try {
>         final JsonNode root = objectMapper.readTree(message);
>         return convertRow(root, (RowTypeInfo) typeInfo);
>     } catch (Throwable t) {
>         throw new IOException("Failed to deserialize JSON object.", t);
>     }
> }
> Now I changed it to:
> public Row deserialize(byte[] message) throws IOException {
>     try {
>         JsonNode root = this.objectMapper.readTree(message);
>         return this.convertRow(root, (RowTypeInfo) this.typeInfo);
>     } catch (Throwable var4) {
>         message = this.objectMapper.writeValueAsBytes("{}");
>         JsonNode root = this.objectMapper.readTree(message);
>         return this.convertRow(root, (RowTypeInfo) this.typeInfo);
>     }
> }
> 
> I think that data format errors are inevitable during network transmission, 
> so can we add a new column to the table for malformed data, like Spark SQL 
> does?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7205) Add UUID supported in TableAPI/SQL

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579583#comment-16579583
 ] 

ASF GitHub Bot commented on FLINK-7205:
---

twalthr commented on issue #6381: [FLINK-7205] [table] Add UUID supported in 
SQL and TableApi
URL: https://github.com/apache/flink/pull/6381#issuecomment-412820572
 
 
   Thank you @buptljy. Merging...


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add UUID supported in TableAPI/SQL
> --
>
> Key: FLINK-7205
> URL: https://issues.apache.org/jira/browse/FLINK-7205
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>  Labels: pull-request-available
>
> UUID() returns a value that conforms to UUID version 1 as described in RFC 
> 4122. The value is a 128-bit number represented as a utf8 string of five 
> hexadecimal numbers in aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee format:
> The first three numbers are generated from the low, middle, and high parts of 
> a timestamp. The high part also includes the UUID version number.
> The fourth number preserves temporal uniqueness in case the timestamp value 
> loses monotonicity (for example, due to daylight saving time).
> The fifth number is an IEEE 802 node number that provides spatial uniqueness. 
> A random number is substituted if the latter is not available (for example, 
> because the host device has no Ethernet card, or it is unknown how to find 
> the hardware address of an interface on the host operating system). In this 
> case, spatial uniqueness cannot be guaranteed. Nevertheless, a collision 
> should have very low probability.
> See: [RFC 4122: 
> http://www.ietf.org/rfc/rfc4122.txt|http://www.ietf.org/rfc/rfc4122.txt]
> See detailed semantics:
>MySql: 
> [https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid|https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid]
> Any feedback is welcome :-).
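
A hypothetical usage sketch once this is merged, assuming the function is 
registered as {{UUID()}} in SQL and that a table {{Orders}} has been 
registered beforehand (both names are illustrative):

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

// attach a generated UUID to every row of the (previously registered) table
Table withIds = tableEnv.sqlQuery("SELECT UUID() AS id FROM Orders");
{code}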



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] twalthr commented on issue #6381: [FLINK-7205] [table] Add UUID supported in SQL and TableApi

2018-08-14 Thread GitBox
twalthr commented on issue #6381: [FLINK-7205] [table] Add UUID supported in 
SQL and TableApi
URL: https://github.com/apache/flink/pull/6381#issuecomment-412820572
 
 
   Thank you @buptljy. Merging...


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

