[jira] [Commented] (FLINK-9190) YarnResourceManager sometimes does not request new Containers

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598274#comment-16598274
 ] 

ASF GitHub Bot commented on FLINK-9190:
---

TisonKun commented on issue #5931: [FLINK-9190][flip6,yarn] Request new 
container if container completed unexpectedly
URL: https://github.com/apache/flink/pull/5931#issuecomment-417558343
 
 
   @tillrohrmann thanks for your reply. Here is one more question.
   Does `ResourceManager` ask for slots by itself? With the current codebase, 
`ResourceManager` asks for a new worker when a container completes unexpectedly. 
What if an `ExecutionGraph` failover happens concurrently with `ResourceManager` 
requesting a new worker? Would an extra worker be started?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> YarnResourceManager sometimes does not request new Containers
> -
>
> Key: FLINK-9190
> URL: https://issues.apache.org/jira/browse/FLINK-9190
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, YARN
>Affects Versions: 1.5.0
> Environment: Hadoop 2.8.3
> ZooKeeper 3.4.5
> Flink 71c3cd2781d36e0a03d022a38cc4503d343f7ff8
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Blocker
>  Labels: flip-6, pull-request-available
> Fix For: 1.5.0
>
> Attachments: yarn-logs
>
>
> *Description*
> The {{YarnResourceManager}} does not request new containers if 
> {{TaskManagers}} are killed rapidly in succession. After 5 minutes the job is 
> restarted due to {{NoResourceAvailableException}}, and the job runs normally 
> afterwards. I suspect that {{TaskManager}} failures are not registered if the 
> failure occurs before the {{TaskManager}} registers with the master. Logs are 
> attached; I added additional log statements to 
> {{YarnResourceManager.onContainersCompleted}} and 
> {{YarnResourceManager.onContainersAllocated}}.
> *Expected Behavior*
> The {{YarnResourceManager}} should recognize that the container is completed 
> and keep requesting new containers. The job should run as soon as resources 
> are available. 
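
For illustration only, here is a rough, hypothetical sketch of the expected behavior (this is not Flink's actual {{YarnResourceManager}} code; resource sizes and class names are placeholders): a YARN {{AMRMClientAsync}} callback handler that requests a replacement for every unexpectedly completed container, regardless of whether the TaskManager ever registered with the master.

{code:java}
import java.util.List;

import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;

class ReplacementRequestingHandler {

    private final AMRMClientAsync<AMRMClient.ContainerRequest> resourceManagerClient;
    // Illustrative container sizing, not Flink's actual values.
    private final Resource taskManagerResource = Resource.newInstance(4096, 2);
    private final Priority priority = Priority.newInstance(0);

    ReplacementRequestingHandler(AMRMClientAsync<AMRMClient.ContainerRequest> client) {
        this.resourceManagerClient = client;
    }

    /** Called by YARN when containers complete (normally or abnormally). */
    public void onContainersCompleted(List<ContainerStatus> statuses) {
        for (ContainerStatus status : statuses) {
            if (status.getExitStatus() != 0) {
                // Container died unexpectedly: immediately ask YARN for a new one
                // instead of waiting until a NoResourceAvailableException restarts the job.
                resourceManagerClient.addContainerRequest(
                    new AMRMClient.ContainerRequest(taskManagerResource, null, null, priority));
            }
        }
    }
}
{code}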



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] TisonKun commented on issue #5931: [FLINK-9190][flip6, yarn] Request new container if container completed unexpectedly

2018-08-30 Thread GitBox
TisonKun commented on issue #5931: [FLINK-9190][flip6,yarn] Request new 
container if container completed unexpectedly
URL: https://github.com/apache/flink/pull/5931#issuecomment-417558343
 
 
   @tillrohrmann thanks for your reply. Here is one more question.
   Does `ResourceManager` ask for slots by itself? With the current codebase, 
`ResourceManager` asks for a new worker when a container completes unexpectedly. 
What if an `ExecutionGraph` failover happens concurrently with `ResourceManager` 
requesting a new worker? Would an extra worker be started?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-10265) Configure checkpointing behavior for SQL Client

2018-08-30 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-10265:


Assignee: vinoyang

> Configure checkpointing behavior for SQL Client
> ---
>
> Key: FLINK-10265
> URL: https://issues.apache.org/jira/browse/FLINK-10265
> Project: Flink
>  Issue Type: New Feature
>  Components: SQL Client
>Reporter: Timo Walther
>Assignee: vinoyang
>Priority: Major
>
> The SQL Client environment file should expose checkpointing-related 
> properties:
> - enable checkpointing
> - checkpointing interval
> - mode
> - timeout
> - etc. see {{org.apache.flink.streaming.api.environment.CheckpointConfig}}
> Per-job selection of state backends and their configuration should also be possible.
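
For reference, a minimal Java sketch of the {{CheckpointConfig}} settings listed above. This is a non-authoritative illustration of the DataStream API equivalents the SQL Client would have to drive; the environment-file keys themselves are not defined by this ticket.

{code:java}
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointConfigSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // enable checkpointing + checkpointing interval
        env.enableCheckpointing(60_000L);

        CheckpointConfig config = env.getCheckpointConfig();
        // mode
        config.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
        // timeout
        config.setCheckpointTimeout(600_000L);
        // further knobs from CheckpointConfig that an environment file could expose
        config.setMinPauseBetweenCheckpoints(5_000L);
        config.setMaxConcurrentCheckpoints(1);
    }
}
{code}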



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-10262) Web UI does not show error stack trace any more when program submission fails

2018-08-30 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-10262:


Assignee: vinoyang

> Web UI does not show error stack trace any more when program submission fails
> -
>
> Key: FLINK-10262
> URL: https://issues.apache.org/jira/browse/FLINK-10262
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.6.0
>Reporter: Stephan Ewen
>Assignee: vinoyang
>Priority: Major
>
> Earlier versions reported the stack trace of exceptions that occurred in the 
> program.
> Flink 1.6 shows only 
> {{org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error.}}, which makes debugging harder.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-10258) Allow streaming sources to be present for batch executions

2018-08-30 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-10258:


Assignee: vinoyang

> Allow streaming sources to be present for batch executions 
> ---
>
> Key: FLINK-10258
> URL: https://issues.apache.org/jira/browse/FLINK-10258
> Project: Flink
>  Issue Type: Bug
>  Components: SQL Client
>Reporter: Timo Walther
>Assignee: vinoyang
>Priority: Major
>
> For example, a filesystem connector with CSV format is defined and an 
> update mode has been set. When switching to {{SET execution.type=batch}} in the 
> CLI, the connector is no longer valid and an exception blocks the execution 
> of new SQL statements.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] Clarkkkkk opened a new pull request #6639: [hotfix][doc][sql-client] Modify typo in sql client doc

2018-08-30 Thread GitBox
Clarkkkkk opened a new pull request #6639: [hotfix][doc][sql-client] Modify 
typo in sql client doc
URL: https://github.com/apache/flink/pull/6639
 
 
   This closes #6635


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10230) Support printing the query of a view

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598190#comment-16598190
 ] 

ASF GitHub Bot commented on FLINK-10230:


yanghua commented on issue #6637: [FLINK-10230][SQL Client] Support printing 
the query of a view
URL: https://github.com/apache/flink/pull/6637#issuecomment-417540203
 
 
   @twalthr @fhueske 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support printing the query of a view
> 
>
> Key: FLINK-10230
> URL: https://issues.apache.org/jira/browse/FLINK-10230
> Project: Flink
>  Issue Type: New Feature
>  Components: SQL Client
>Reporter: Timo Walther
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> FLINK-10163 added initial support for views in SQL Client. We should add a 
> command that allows for printing the query of a view for debugging. MySQL 
> offers {{SHOW CREATE VIEW}} for this. Hive generalizes this to {{SHOW CREATE 
> TABLE}}. The latter one could be extended to also show information about the 
> used table factories and properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] Clarkkkkk closed pull request #6635: [hotfix][doc][sql-client] Fix the example configuration yaml file by …

2018-08-30 Thread GitBox
Clarkkkkk closed pull request #6635: [hotfix][doc][sql-client] Fix the example 
configuration yaml file by …
URL: https://github.com/apache/flink/pull/6635
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/dev/table/sqlClient.md b/docs/dev/table/sqlClient.md
index 0e8d2d651b1..18d1984b728 100644
--- a/docs/dev/table/sqlClient.md
+++ b/docs/dev/table/sqlClient.md
@@ -279,7 +279,7 @@ tables:
 connector:
   property-version: 1
   type: kafka
-  version: 0.11
+  version: "0.11"
   topic: TaxiRides
   startup-mode: earliest-offset
   properties:
@@ -422,7 +422,7 @@ tables:
 connector:
   property-version: 1
   type: kafka
-  version: 0.11
+  version: "0.11"
   topic: OutputTopic
   properties:
 - key: zookeeper.connect


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yanghua commented on issue #6637: [FLINK-10230][SQL Client] Support printing the query of a view

2018-08-30 Thread GitBox
yanghua commented on issue #6637: [FLINK-10230][SQL Client] Support printing 
the query of a view
URL: https://github.com/apache/flink/pull/6637#issuecomment-417540203
 
 
   @twalthr @fhueske 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lamber-ken commented on issue #6625: [hotfix][docs] Fix some error about the example of stateful source function in state documentatition

2018-08-30 Thread GitBox
lamber-ken commented on issue #6625: [hotfix][docs] Fix some error about the 
example of stateful source function in state documentatition
URL: https://github.com/apache/flink/pull/6625#issuecomment-417539666
 
 
   cc @dawidwys


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7964) Add Apache Kafka 1.0/1.1 connectors

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598175#comment-16598175
 ] 

ASF GitHub Bot commented on FLINK-7964:
---

yanghua commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 
connectors
URL: https://github.com/apache/flink/pull/6577#issuecomment-417538633
 
 
   @eliaslevy Things may not be that simple: we are not only using Kafka's 
public API, but also some non-public APIs, and we access different Kafka internal 
fields through reflection. This code changes with different Kafka client 
versions. If we maintained only one connector, it would be filled with 
special-case handling for every Kafka client, with if/else statements 
everywhere, which could make it more difficult to maintain.
   
   Maybe you can take a look at the 
[links](https://github.com/apache/flink/pull/6577#issuecomment-414951008) 
referenced there.
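
As a rough illustration only (the class and field names below are hypothetical, not actual Kafka internals), reflective access to client internals is inherently tied to one client version, which is the core of the maintenance concern above:

{code:java}
import java.lang.reflect.Field;

class KafkaInternalsAccess {
    /**
     * Reads a private field from a Kafka client object via reflection.
     * If the field is renamed or removed in another client version, this
     * fails at runtime, which is why per-version connectors (or scattered
     * if/else version checks) are needed. The field name passed in is
     * purely hypothetical.
     */
    static Object readInternalField(Object kafkaClientInternal, String fieldName) throws Exception {
        Field field = kafkaClientInternal.getClass().getDeclaredField(fieldName);
        field.setAccessible(true);
        return field.get(kafkaClientInternal);
    }
}
{code}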


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Apache Kafka 1.0/1.1 connectors
> ---
>
> Key: FLINK-7964
> URL: https://issues.apache.org/jira/browse/FLINK-7964
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.4.0
>Reporter: Hai Zhou
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Kafka 1.0.0 is no mere bump of the version number. The Apache Kafka Project 
> Management Committee has packed a number of valuable enhancements into the 
> release. Here is a summary of a few of them:
> * Since its introduction in version 0.10, the Streams API has become hugely 
> popular among Kafka users, including the likes of Pinterest, Rabobank, 
> Zalando, and The New York Times. In 1.0, the API continues to evolve at a 
> healthy pace. To begin with, the builder API has been improved (KIP-120). A 
> new API has been added to expose the state of active tasks at runtime 
> (KIP-130). The new cogroup API makes it much easier to deal with partitioned 
> aggregates with fewer StateStores and fewer moving parts in your code 
> (KIP-150). Debuggability gets easier with enhancements to the print() and 
> writeAsText() methods (KIP-160). And if that’s not enough, check out KIP-138 
> and KIP-161 too. For more on streams, check out the Apache Kafka Streams 
> documentation, including some helpful new tutorial videos.
> * Operating Kafka at scale requires that the system remain observable, and to 
> make that easier, we’ve made a number of improvements to metrics. These are 
> too many to summarize without becoming tedious, but Connect metrics have been 
> significantly improved (KIP-196), a litany of new health check metrics are 
> now exposed (KIP-188), and we now have a global topic and partition count 
> (KIP-168). Check out KIP-164 and KIP-187 for even more.
> * We now support Java 9, leading, among other things, to significantly faster 
> TLS and CRC32C implementations. Over-the-wire encryption will be faster now, 
> which will keep Kafka fast and compute costs low when encryption is enabled.
> * In keeping with the security theme, KIP-152 cleans up the error handling on 
> Simple Authentication Security Layer (SASL) authentication attempts. 
> Previously, some authentication error conditions were indistinguishable from 
> broker failures and were not logged in a clear way. This is cleaner now.
> * Kafka can now tolerate disk failures better. Historically, JBOD storage 
> configurations have not been recommended, but the architecture has 
> nevertheless been tempting: after all, why not rely on Kafka’s own 
> replication mechanism to protect against storage failure rather than using 
> RAID? With KIP-112, Kafka now handles disk failure more gracefully. A single 
> disk failure in a JBOD broker will not bring the entire broker down; rather, 
> the broker will continue serving any log files that remain on functioning 
> disks.
> * Since release 0.11.0, the idempotent producer (which is the producer used 
> in the presence of a transaction, which of course is the producer we use for 
> exactly-once processing) required max.in.flight.requests.per.connection to be 
> equal to one. As anyone who has written or tested a wire protocol can attest, 
> this put an upper bound on throughput. Thanks to KAFKA-5949, this can now be 
> as large as five, relaxing the throughput constraint quite a bit.
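
Regarding the last point, a small hedged sketch of standard Kafka producer properties showing the relaxed constraint (the broker address is a placeholder):

{code:java}
import java.util.Properties;

class IdempotentProducerConfig {
    static Properties sketch() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address
        props.setProperty("enable.idempotence", "true");
        // Before KAFKA-5949 the idempotent producer required this to be 1;
        // with Kafka 1.0 it may be as large as 5.
        props.setProperty("max.in.flight.requests.per.connection", "5");
        props.setProperty("acks", "all");
        return props;
    }
}
{code}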



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10256) Port legacy jobmanager test to FLIP-6

2018-08-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/FLINK-10256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598176#comment-16598176
 ] 

陈梓立 commented on FLINK-10256:
-

As [~till.rohrmann] suggested, it is preferable to describe this JIRA as "Port 
legacy jobmanager tests to FLIP-6".
The goal is not to harden the legacy code but to make sure we have the same 
test coverage in the FLIP-6 codebase.

> Port legacy jobmanager test to FLIP-6
> -
>
> Key: FLINK-10256
> URL: https://issues.apache.org/jira/browse/FLINK-10256
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: 陈梓立
>Assignee: 陈梓立
>Priority: Major
> Fix For: 1.7.0
>
>
> I am planning to rework JobManagerFailsITCase and JobManagerTest into 
> JobMasterITCase and JobMasterHAITCase. That is, reorganize the legacy tests, 
> make them neat and cover cases explicitly. The PR would follow before this 
> weekend.
> While reworking, I'd like to add more JM failover test cases, listed below, for 
> the further implementation of JM failover with the RECONCILING state. By "JM 
> failover" I mean a real-world failover (such as power loss or process exit), 
> without calling Flink's internal postStop logic or anything like it.
> 1. Streaming task with jm failover.
> 2. Streaming task with jm failover concurrent to task fail.
> 3. Batch task with jm failover.
> 4. Batch task with jm failover concurrent to task fail.
> 5. Batch task with jm failover when some vertex has already been FINISHED.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 connectors

2018-08-30 Thread GitBox
yanghua commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 
connectors
URL: https://github.com/apache/flink/pull/6577#issuecomment-417538633
 
 
   @eliaslevy Things may not be that simple: we are not only using Kafka's 
public API, but also some non-public APIs, and we access different Kafka internal 
fields through reflection. This code changes with different Kafka client 
versions. If we maintained only one connector, it would be filled with 
special-case handling for every Kafka client, with if/else statements 
everywhere, which could make it more difficult to maintain.
   
   Maybe you can take a look at the 
[links](https://github.com/apache/flink/pull/6577#issuecomment-414951008) 
referenced there.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-10256) Port legacy jobmanager test to FLIP-6

2018-08-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/FLINK-10256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

陈梓立 updated FLINK-10256:

Summary: Port legacy jobmanager test to FLIP-6  (was: Rework 
JobManagerFailsITCase and JobManagerTest into JobMasterITCase and 
JobMasterHAITCase)

> Port legacy jobmanager test to FLIP-6
> -
>
> Key: FLINK-10256
> URL: https://issues.apache.org/jira/browse/FLINK-10256
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: 陈梓立
>Assignee: 陈梓立
>Priority: Major
> Fix For: 1.7.0
>
>
> I am planning to rework JobManagerFailsITCase and JobManagerTest into 
> JobMasterITCase and JobMasterHAITCase. That is, reorganize the legacy tests, 
> make them neat and cover cases explicitly. The PR would follow before this 
> weekend.
> While reworking, I'd like to add more JM failover test cases, listed below, for 
> the further implementation of JM failover with the RECONCILING state. By "JM 
> failover" I mean a real-world failover (such as power loss or process exit), 
> without calling Flink's internal postStop logic or anything like it.
> 1. Streaming task with jm failover.
> 2. Streaming task with jm failover concurrent to task fail.
> 3. Batch task with jm failover.
> 4. Batch task with jm failover concurrent to task fail.
> 5. Batch task with jm failover when some vertex has already been FINISHED.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9363) Bump up the Jackson version

2018-08-30 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9363:
--
Description: 
CVE's for Jackson :

CVE-2017-17485
CVE-2018-5968
CVE-2018-7489

We can upgrade to 2.9.5

  was:
CVE's for Jackson:

CVE-2017-17485
CVE-2018-5968
CVE-2018-7489

We can upgrade to 2.9.5


> Bump up the Jackson version
> ---
>
> Key: FLINK-9363
> URL: https://issues.apache.org/jira/browse/FLINK-9363
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Major
>  Labels: security
>
> CVE's for Jackson :
> CVE-2017-17485
> CVE-2018-5968
> CVE-2018-7489
> We can upgrade to 2.9.5



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9924) Upgrade zookeeper to 3.4.13

2018-08-30 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9924:
--
Description: 
zookeeper 3.4.13 is being released.

ZOOKEEPER-2959 fixes data loss when observer is used

ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / 
cloud) environment

  was:
zookeeper 3.4.13 is being released.

ZOOKEEPER-2959 fixes data loss when observer is used
ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / 
cloud) environment


> Upgrade zookeeper to 3.4.13
> ---
>
> Key: FLINK-9924
> URL: https://issues.apache.org/jira/browse/FLINK-9924
> Project: Flink
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Major
>
> zookeeper 3.4.13 is being released.
> ZOOKEEPER-2959 fixes data loss when observer is used
> ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container 
> / cloud) environment



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10120) Support string representation for types like array

2018-08-30 Thread Jun Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Zhang closed FLINK-10120.
-
  Resolution: Duplicate
Release Note: Duplicated by FLINK-10170

> Support string representation for types like array
> --
>
> Key: FLINK-10120
> URL: https://issues.apache.org/jira/browse/FLINK-10120
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL, Type Serialization System
>Affects Versions: 1.6.1, 1.7.0
>Reporter: Jiayi Liao
>Assignee: Jun Zhang
>Priority: Minor
> Fix For: 1.6.0
>
>
> In TypeStringUtils.readTypeInfo:
> {code:java}
> case _: BasicArrayTypeInfo[_, _] | _: ObjectArrayTypeInfo[_, _] |
>  _: PrimitiveArrayTypeInfo[_] =>
>   throw new TableException("A string representation for array types is 
> not supported yet.")
> {code}
> This exception makes it impossible to create a table schema or format schema with 
> an array type field.
> I'm not sure whether this is an improvement or not, because you throw an 
> exception explicitly here.
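
For illustration, a minimal sketch (assuming the descriptor-based schema path described in FLINK-10170; the field names are illustrative) of where the limitation surfaces:

{code:java}
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.descriptors.Schema;

class ArrayFieldSchemaSketch {
    static Schema sketch() {
        // Declaring an array-typed field in a descriptor-based schema currently fails,
        // because the type information has to be converted to its string representation.
        return new Schema()
            .field("id", Types.LONG)
            // Expected to fail with:
            // TableException: A string representation for array types is not supported yet.
            .field("tags", Types.OBJECT_ARRAY(Types.STRING));
    }
}
{code}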



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10170) Support string representation for map and array types in descriptor-based Table API

2018-08-30 Thread Jun Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598120#comment-16598120
 ] 

Jun Zhang commented on FLINK-10170:
---

Ready to be merged, [~fhueske] Could you please have a look?

> Support string representation for map and array types in descriptor-based 
> Table API
> ---
>
> Key: FLINK-10170
> URL: https://issues.apache.org/jira/browse/FLINK-10170
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL
>Affects Versions: 1.6.1, 1.7.0
>Reporter: Jun Zhang
>Assignee: Jun Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
> Attachments: flink-10170
>
>
> Since 1.6, the recommended way of creating a source/sink table is using 
> connector/format/schema descriptors. However, when declaring map types in 
> the schema descriptor, the following exception is thrown:
> {quote}org.apache.flink.table.api.TableException: A string representation for 
> map types is not supported yet.{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10120) Support string representation for types like array

2018-08-30 Thread Jun Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598119#comment-16598119
 ] 

Jun Zhang commented on FLINK-10120:
---

Issue resolved, please refer to FLINK-10170.

> Support string representation for types like array
> --
>
> Key: FLINK-10120
> URL: https://issues.apache.org/jira/browse/FLINK-10120
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL, Type Serialization System
>Affects Versions: 1.6.1, 1.7.0
>Reporter: Jiayi Liao
>Assignee: Jun Zhang
>Priority: Minor
> Fix For: 1.6.0
>
>
> In TypeStringUtils.readTypeInfo:
> {code:java}
> case _: BasicArrayTypeInfo[_, _] | _: ObjectArrayTypeInfo[_, _] |
>  _: PrimitiveArrayTypeInfo[_] =>
>   throw new TableException("A string representation for array types is 
> not supported yet.")
> {code}
> This exception makes it impossible to create a table schema or format schema with 
> an array type field.
> I'm not sure whether this is an improvement or not, because you throw an 
> exception explicitly here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10061) Fix unsupported reconfiguration in KafkaTableSink

2018-08-30 Thread Jun Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Zhang closed FLINK-10061.
-
  Resolution: Unresolved
Release Note: Table.writeToSink() will be deprecated.

> Fix unsupported reconfiguration in KafkaTableSink
> -
>
> Key: FLINK-10061
> URL: https://issues.apache.org/jira/browse/FLINK-10061
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.6.0
>Reporter: Jun Zhang
>Assignee: Jun Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
>
> When using KafkaTableSink in {{table.writeToSink()}}, the following exception is 
> thrown:
> {quote} java.lang.UnsupportedOperationException: Reconfiguration of this sink 
> is not supported.
> {quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] hequn8128 commented on issue #6631: [FLINK-10229][SQL CLIENT] Support listing of views

2018-08-30 Thread GitBox
hequn8128 commented on issue #6631: [FLINK-10229][SQL CLIENT] Support listing 
of views
URL: https://github.com/apache/flink/pull/6631#issuecomment-417516814
 
 
   @yanghua Thanks for your PR. I will take a look this weekend. :-)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10229) Support listing of views

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598098#comment-16598098
 ] 

ASF GitHub Bot commented on FLINK-10229:


hequn8128 commented on issue #6631: [FLINK-10229][SQL CLIENT] Support listing 
of views
URL: https://github.com/apache/flink/pull/6631#issuecomment-417516814
 
 
   @yanghua Thanks for your PR. I will take a look this weekend. :-)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support listing of views
> 
>
> Key: FLINK-10229
> URL: https://issues.apache.org/jira/browse/FLINK-10229
> Project: Flink
>  Issue Type: New Feature
>  Components: SQL Client
>Reporter: Timo Walther
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> FLINK-10163 added initial support of views for the SQL Client. In line with 
> other database vendors, views are listed by {{SHOW TABLES}}. However, 
> there should be a way of listing only the views. We can support a {{SHOW 
> VIEWS}} command.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (FLINK-10141) Reduce lock contention introduced with 1.5

2018-08-30 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber resolved FLINK-10141.
-
   Resolution: Fixed
Fix Version/s: 1.5.4
   1.7.0
   1.6.1

Fixed via:
- master: 6c7603724be6c146cfc8019b77b60236d1d3d582
- release-1.6: b3dadd6fe3fa3e163ec9e25bff52c642e91f683f
- release-1.5: 0d5f75e9a005a32dcff1b545f146064bd6ce66e9

> Reduce lock contention introduced with 1.5
> --
>
> Key: FLINK-10141
> URL: https://issues.apache.org/jira/browse/FLINK-10141
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.6.0, 1.7.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0, 1.5.4
>
>
> With the changes around introducing credit-based flow control as well as the 
> low latency changes, we unfortunately also introduced some lock contention 
> on {{RemoteInputChannel#bufferQueue}} and 
> {{RemoteInputChannel#receivedBuffers}}, and we ask for queue sizes when 
> the only thing we need to know is whether the queue is empty.
> This was observed as a high idle CPU load with no events in the stream but 
> only watermarks (every 500ms) and many slots on a single machine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (FLINK-10142) Reduce synchronization overhead for credit notifications

2018-08-30 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber reopened FLINK-10142:
-

> Reduce synchronization overhead for credit notifications
> 
>
> Key: FLINK-10142
> URL: https://issues.apache.org/jira/browse/FLINK-10142
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.6.0, 1.7.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0, 1.5.4
>
>
> When credit-based flow control was introduced, we also added some checks and 
> optimisations for uncommon code paths that make common code paths 
> unnecessarily more expensive, e.g. checking whether a channel was released 
> before forwarding a credit notification to Netty. Such checks would have to 
> be confirmed by the Netty thread anyway and thus only add additional load for 
> something that happens only once (per channel).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-10142) Reduce synchronization overhead for credit notifications

2018-08-30 Thread Nico Kruber (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597926#comment-16597926
 ] 

Nico Kruber edited comment on FLINK-10142 at 8/30/18 9:07 PM:
--

Fixed via:
- master: c7ddb0a6cd4edfbf95471950c497e2592d4fdf8f
- release-1.6: 4cf197ede67dee9c4fbb41a4c5a8a61b40ddfa5d
- release-1.5: 9ae5009b6a82248bfae99dac088c1f6e285aa70f


was (Author: nicok):
Fixed via:
- master: 6c7603724be6c146cfc8019b77b60236d1d3d582
- release-1.6: 4cf197ede67dee9c4fbb41a4c5a8a61b40ddfa5d
- release-1.5: 9ae5009b6a82248bfae99dac088c1f6e285aa70f

> Reduce synchronization overhead for credit notifications
> 
>
> Key: FLINK-10142
> URL: https://issues.apache.org/jira/browse/FLINK-10142
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.6.0, 1.7.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0, 1.5.4
>
>
> When credit-based flow control was introduced, we also added some checks and 
> optimisations for uncommon code paths that make common code paths 
> unnecessarily more expensive, e.g. checking whether a channel was released 
> before forwarding a credit notification to Netty. Such checks would have to 
> be confirmed by the Netty thread anyway and thus only add additional load for 
> something that happens only once (per channel).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (FLINK-10142) Reduce synchronization overhead for credit notifications

2018-08-30 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber resolved FLINK-10142.
-
Resolution: Fixed

> Reduce synchronization overhead for credit notifications
> 
>
> Key: FLINK-10142
> URL: https://issues.apache.org/jira/browse/FLINK-10142
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.6.0, 1.7.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0, 1.5.4
>
>
> When credit-based flow control was introduced, we also added some checks and 
> optimisations for uncommon code paths that make common code paths 
> unnecessarily more expensive, e.g. checking whether a channel was released 
> before forwarding a credit notification to Netty. Such checks would have to 
> be confirmed by the Netty thread anyway and thus only add additional load for 
> something that happens only once (per channel).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (FLINK-10142) Reduce synchronization overhead for credit notifications

2018-08-30 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber resolved FLINK-10142.
-
   Resolution: Fixed
Fix Version/s: 1.5.4
   1.7.0
   1.6.1

Fixed via:
- master: 6c7603724be6c146cfc8019b77b60236d1d3d582
- release-1.6: 4cf197ede67dee9c4fbb41a4c5a8a61b40ddfa5d
- release-1.5: 9ae5009b6a82248bfae99dac088c1f6e285aa70f

> Reduce synchronization overhead for credit notifications
> 
>
> Key: FLINK-10142
> URL: https://issues.apache.org/jira/browse/FLINK-10142
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.6.0, 1.7.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0, 1.5.4
>
>
> When credit-based flow control was introduced, we also added some checks and 
> optimisations for uncommon code paths that make common code paths 
> unnecessarily more expensive, e.g. checking whether a channel was released 
> before forwarding a credit notification to Netty. Such checks would have to 
> be confirmed by the Netty thread anyway and thus only add additional load for 
> something that happens only once (per channel).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10142) Reduce synchronization overhead for credit notifications

2018-08-30 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-10142:

Affects Version/s: 1.5.0
   1.5.1
   1.5.3

> Reduce synchronization overhead for credit notifications
> 
>
> Key: FLINK-10142
> URL: https://issues.apache.org/jira/browse/FLINK-10142
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.6.0, 1.7.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
>
> When credit-based flow control was introduced, we also added some checks and 
> optimisations for uncommon code paths that make common code paths 
> unnecessarily more expensive, e.g. checking whether a channel was released 
> before forwarding a credit notification to Netty. Such checks would have to 
> be confirmed by the Netty thread anyway and thus only add additional load for 
> something that happens only once (per channel).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10141) Reduce lock contention introduced with 1.5

2018-08-30 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-10141:

Affects Version/s: 1.5.0
   1.5.1
   1.5.3

> Reduce lock contention introduced with 1.5
> --
>
> Key: FLINK-10141
> URL: https://issues.apache.org/jira/browse/FLINK-10141
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.6.0, 1.7.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
>
> With the changes around introducing credit-based flow control as well as the 
> low latency changes, we unfortunately also introduced some lock contention 
> on {{RemoteInputChannel#bufferQueue}} and 
> {{RemoteInputChannel#receivedBuffers}}, and we ask for queue sizes when 
> the only thing we need to know is whether the queue is empty.
> This was observed as a high idle CPU load with no events in the stream but 
> only watermarks (every 500ms) and many slots on a single machine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] NicoK closed pull request #6553: [FLINK-10141][network] optimisations reducing lock contention

2018-08-30 Thread GitBox
NicoK closed pull request #6553: [FLINK-10141][network] optimisations reducing 
lock contention
URL: https://github.com/apache/flink/pull/6553
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
index 28f30209892..c4954c0fb6d 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
@@ -191,16 +191,16 @@ void retriggerSubpartitionRequest(int subpartitionIndex) 
throws IOException, Int
checkError();
 
final Buffer next;
-   final int remaining;
+   final boolean moreAvailable;
 
synchronized (receivedBuffers) {
next = receivedBuffers.poll();
-   remaining = receivedBuffers.size();
+   moreAvailable = !receivedBuffers.isEmpty();
}
 
numBytesIn.inc(next.getSizeUnsafe());
numBuffersIn.inc();
-   return Optional.of(new BufferAndAvailability(next, remaining > 
0, getSenderBacklog()));
+   return Optional.of(new BufferAndAvailability(next, 
moreAvailable, getSenderBacklog()));
}
 
// 

@@ -306,8 +306,8 @@ public void recycle(MemorySegment segment) {
int numAddedBuffers;
 
synchronized (bufferQueue) {
-   // Important: check the isReleased state inside 
synchronized block, so there is no
-   // race condition when recycle and releaseAllResources 
running in parallel.
+   // Similar to notifyBufferAvailable(), make sure that 
we never add a buffer
+   // after releaseAllResources() released all buffers 
(see below for details).
if (isReleased.get()) {
try {

inputGate.returnExclusiveSegments(Collections.singletonList(segment));
@@ -368,8 +368,13 @@ public boolean notifyBufferAvailable(Buffer buffer) {
checkState(isWaitingForFloatingBuffers,
"This channel should be waiting for 
floating buffers.");
 
-   // Important: double check the isReleased state 
inside synchronized block, so there is no
-   // race condition when notifyBufferAvailable 
and releaseAllResources running in parallel.
+   // Important: make sure that we never add a 
buffer after releaseAllResources()
+   // released all buffers. Following scenarios 
exist:
+   // 1) releaseAllResources() already released 
buffers inside bufferQueue
+   // -> then isReleased is set correctly
+   // 2) releaseAllResources() did not yet release 
buffers from bufferQueue
+   // -> we may or may not have set isReleased yet 
but will always wait for the
+   //lock on bufferQueue to release buffers
if (isReleased.get() || 
bufferQueue.getAvailableBufferSize() >= numRequiredBuffers) {
isWaitingForFloatingBuffers = false;
recycleBuffer = false; // just in case
@@ -385,10 +390,10 @@ public boolean notifyBufferAvailable(Buffer buffer) {
} else {
needMoreBuffers = true;
}
+   }
 
-   if (unannouncedCredit.getAndAdd(1) == 0) {
-   notifyCreditAvailable();
-   }
+   if (unannouncedCredit.getAndAdd(1) == 0) {
+   notifyCreditAvailable();
}
 
return needMoreBuffers;
@@ -484,8 +489,8 @@ void onSenderBacklog(int backlog) throws IOException {
int numRequestedBuffers = 0;
 
synchronized (bufferQueue) {
-   // Important: check the isReleased state inside 
synchronized block, so there is no
-   // race condition when onSenderBacklog and 
releaseAllResources 

[jira] [Commented] (FLINK-10141) Reduce lock contention introduced with 1.5

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597923#comment-16597923
 ] 

ASF GitHub Bot commented on FLINK-10141:


NicoK closed pull request #6553: [FLINK-10141][network] optimisations reducing 
lock contention
URL: https://github.com/apache/flink/pull/6553
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
index 28f30209892..c4954c0fb6d 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
@@ -191,16 +191,16 @@ void retriggerSubpartitionRequest(int subpartitionIndex) 
throws IOException, Int
checkError();
 
final Buffer next;
-   final int remaining;
+   final boolean moreAvailable;
 
synchronized (receivedBuffers) {
next = receivedBuffers.poll();
-   remaining = receivedBuffers.size();
+   moreAvailable = !receivedBuffers.isEmpty();
}
 
numBytesIn.inc(next.getSizeUnsafe());
numBuffersIn.inc();
-   return Optional.of(new BufferAndAvailability(next, remaining > 
0, getSenderBacklog()));
+   return Optional.of(new BufferAndAvailability(next, 
moreAvailable, getSenderBacklog()));
}
 
// 

@@ -306,8 +306,8 @@ public void recycle(MemorySegment segment) {
int numAddedBuffers;
 
synchronized (bufferQueue) {
-   // Important: check the isReleased state inside 
synchronized block, so there is no
-   // race condition when recycle and releaseAllResources 
running in parallel.
+   // Similar to notifyBufferAvailable(), make sure that 
we never add a buffer
+   // after releaseAllResources() released all buffers 
(see below for details).
if (isReleased.get()) {
try {

inputGate.returnExclusiveSegments(Collections.singletonList(segment));
@@ -368,8 +368,13 @@ public boolean notifyBufferAvailable(Buffer buffer) {
checkState(isWaitingForFloatingBuffers,
"This channel should be waiting for 
floating buffers.");
 
-   // Important: double check the isReleased state 
inside synchronized block, so there is no
-   // race condition when notifyBufferAvailable 
and releaseAllResources running in parallel.
+   // Important: make sure that we never add a 
buffer after releaseAllResources()
+   // released all buffers. Following scenarios 
exist:
+   // 1) releaseAllResources() already released 
buffers inside bufferQueue
+   // -> then isReleased is set correctly
+   // 2) releaseAllResources() did not yet release 
buffers from bufferQueue
+   // -> we may or may not have set isReleased yet 
but will always wait for the
+   //lock on bufferQueue to release buffers
if (isReleased.get() || 
bufferQueue.getAvailableBufferSize() >= numRequiredBuffers) {
isWaitingForFloatingBuffers = false;
recycleBuffer = false; // just in case
@@ -385,10 +390,10 @@ public boolean notifyBufferAvailable(Buffer buffer) {
} else {
needMoreBuffers = true;
}
+   }
 
-   if (unannouncedCredit.getAndAdd(1) == 0) {
-   notifyCreditAvailable();
-   }
+   if (unannouncedCredit.getAndAdd(1) == 0) {
+   notifyCreditAvailable();
}
 
return needMoreBuffers;
@@ -484,8 +489,8 @@ void onSenderBacklog(int backlog) throws IOException {
int numRequestedBuffers = 0;
 
  

[jira] [Commented] (FLINK-10142) Reduce synchronization overhead for credit notifications

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597913#comment-16597913
 ] 

ASF GitHub Bot commented on FLINK-10142:


NicoK closed pull request #6555: [FLINK-10142][network] reduce locking around 
credit notification
URL: https://github.com/apache/flink/pull/6555
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/PartitionRequestClient.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/PartitionRequestClient.java
index 27d341aca40..9c9deaa2542 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/PartitionRequestClient.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/PartitionRequestClient.java
@@ -171,10 +171,7 @@ public void operationComplete(ChannelFuture future) throws 
Exception {
}
 
public void notifyCreditAvailable(RemoteInputChannel inputChannel) {
-   // We should skip the notification if the client is already 
closed.
-   if (!closeReferenceCounter.isDisposed()) {
-   clientHandler.notifyCreditAvailable(inputChannel);
-   }
+   clientHandler.notifyCreditAvailable(inputChannel);
}
 
public void close(RemoteInputChannel inputChannel) throws IOException {
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
index 28f30209892..6738abd7f9c 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
@@ -191,16 +191,16 @@ void retriggerSubpartitionRequest(int subpartitionIndex) 
throws IOException, Int
checkError();
 
final Buffer next;
-   final int remaining;
+   final boolean moreAvailable;
 
synchronized (receivedBuffers) {
next = receivedBuffers.poll();
-   remaining = receivedBuffers.size();
+   moreAvailable = !receivedBuffers.isEmpty();
}
 
numBytesIn.inc(next.getSizeUnsafe());
numBuffersIn.inc();
-   return Optional.of(new BufferAndAvailability(next, remaining > 
0, getSenderBacklog()));
+   return Optional.of(new BufferAndAvailability(next, 
moreAvailable, getSenderBacklog()));
}
 
// 

@@ -289,10 +289,7 @@ public String toString() {
private void notifyCreditAvailable() {
checkState(partitionRequestClient != null, "Tried to send task 
event to producer before requesting a queue.");
 
-   // We should skip the notification if this channel is already 
released.
-   if (!isReleased.get()) {
-   partitionRequestClient.notifyCreditAvailable(this);
-   }
+   partitionRequestClient.notifyCreditAvailable(this);
}
 
/**
@@ -306,8 +303,8 @@ public void recycle(MemorySegment segment) {
int numAddedBuffers;
 
synchronized (bufferQueue) {
-   // Important: check the isReleased state inside 
synchronized block, so there is no
-   // race condition when recycle and releaseAllResources 
running in parallel.
+   // Similar to notifyBufferAvailable(), make sure that 
we never add a buffer
+   // after releaseAllResources() released all buffers 
(see below for details).
if (isReleased.get()) {
try {

inputGate.returnExclusiveSegments(Collections.singletonList(segment));
@@ -354,13 +351,6 @@ boolean isWaitingForFloatingBuffers() {
 */
@Override
public boolean notifyBufferAvailable(Buffer buffer) {
-   // Check the isReleased state outside synchronized block first 
to avoid
-   // deadlock with releaseAllResources running in parallel.
-   if (isReleased.get()) {
-   buffer.recycleBuffer();
-   return false;
-   }
-
boolean recycleBuffer = true;
try {
boolean needMoreBuffers = false;
@@ 

[GitHub] NicoK closed pull request #6555: [FLINK-10142][network] reduce locking around credit notification

2018-08-30 Thread GitBox
NicoK closed pull request #6555: [FLINK-10142][network] reduce locking around 
credit notification
URL: https://github.com/apache/flink/pull/6555
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/PartitionRequestClient.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/PartitionRequestClient.java
index 27d341aca40..9c9deaa2542 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/PartitionRequestClient.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/PartitionRequestClient.java
@@ -171,10 +171,7 @@ public void operationComplete(ChannelFuture future) throws 
Exception {
}
 
public void notifyCreditAvailable(RemoteInputChannel inputChannel) {
-   // We should skip the notification if the client is already 
closed.
-   if (!closeReferenceCounter.isDisposed()) {
-   clientHandler.notifyCreditAvailable(inputChannel);
-   }
+   clientHandler.notifyCreditAvailable(inputChannel);
}
 
public void close(RemoteInputChannel inputChannel) throws IOException {
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
index 28f30209892..6738abd7f9c 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
@@ -191,16 +191,16 @@ void retriggerSubpartitionRequest(int subpartitionIndex) 
throws IOException, Int
checkError();
 
final Buffer next;
-   final int remaining;
+   final boolean moreAvailable;
 
synchronized (receivedBuffers) {
next = receivedBuffers.poll();
-   remaining = receivedBuffers.size();
+   moreAvailable = !receivedBuffers.isEmpty();
}
 
numBytesIn.inc(next.getSizeUnsafe());
numBuffersIn.inc();
-   return Optional.of(new BufferAndAvailability(next, remaining > 
0, getSenderBacklog()));
+   return Optional.of(new BufferAndAvailability(next, 
moreAvailable, getSenderBacklog()));
}
 
// 

@@ -289,10 +289,7 @@ public String toString() {
private void notifyCreditAvailable() {
checkState(partitionRequestClient != null, "Tried to send task 
event to producer before requesting a queue.");
 
-   // We should skip the notification if this channel is already 
released.
-   if (!isReleased.get()) {
-   partitionRequestClient.notifyCreditAvailable(this);
-   }
+   partitionRequestClient.notifyCreditAvailable(this);
}
 
/**
@@ -306,8 +303,8 @@ public void recycle(MemorySegment segment) {
int numAddedBuffers;
 
synchronized (bufferQueue) {
-   // Important: check the isReleased state inside 
synchronized block, so there is no
-   // race condition when recycle and releaseAllResources 
running in parallel.
+   // Similar to notifyBufferAvailable(), make sure that 
we never add a buffer
+   // after releaseAllResources() released all buffers 
(see below for details).
if (isReleased.get()) {
try {

inputGate.returnExclusiveSegments(Collections.singletonList(segment));
@@ -354,13 +351,6 @@ boolean isWaitingForFloatingBuffers() {
 */
@Override
public boolean notifyBufferAvailable(Buffer buffer) {
-   // Check the isReleased state outside synchronized block first 
to avoid
-   // deadlock with releaseAllResources running in parallel.
-   if (isReleased.get()) {
-   buffer.recycleBuffer();
-   return false;
-   }
-
boolean recycleBuffer = true;
try {
boolean needMoreBuffers = false;
@@ -368,8 +358,13 @@ public boolean notifyBufferAvailable(Buffer buffer) {
checkState(isWaitingForFloatingBuffers,
"This channel should be waiting for 
floating buffers.");
 

[jira] [Created] (FLINK-10268) Document update deployment/aws HADOOP_CLASSPATH

2018-08-30 Thread Andy M (JIRA)
Andy M created FLINK-10268:
--

 Summary: Document update deployment/aws HADOOP_CLASSPATH
 Key: FLINK-10268
 URL: https://issues.apache.org/jira/browse/FLINK-10268
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Andy M


The Deployment/AWS/Custom EMR Installation documentation needs to be updated. 
Currently the steps will result in a ClassNotFoundException. A step needs to 
be added that includes setting HADOOP_CLASSPATH=`hadoop classpath`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10267) [State] Fix arbitrary iterator access on RocksDBMapIterator

2018-08-30 Thread Yun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang updated FLINK-10267:
-
Description: 
Currently, RocksDBMapIterator loads up to 128 entries into the local cacheEntries 
each time it needs to. Both RocksDBMapIterator#next() and 
RocksDBMapIterator#hasNext() might trigger loading RocksDBEntry objects into 
cacheEntries.

However, if the iterator contains more than 128 entries and we access the 
iterator in the following order: hasNext() -> next() -> hasNext() -> remove(), we 
hit a weird exception when trying to remove the 128th element:
{code:java}
java.lang.IllegalStateException: The remove operation must be called after a 
valid next operation.
{code}
Since we cannot control how users access the iterator, we should fix this bug to 
avoid the unexpected exception.

  was:
Currently, RocksDBMapIterator would load 128 entries into local cacheEntries. 
Both RocksDBMapIterator#next() and RocksDBMapIterator#hasNext() action would 
trigger to load RocksDBEntry into cacheEntries.

However, if the iterator's size larger than 128 and we continue to access the 
iterator with following order: hasNext() -> next() -> hasNext() -> remove(), we 
would meet weird exception when we try to remove the 128th element:
{code:java}
java.lang.IllegalStateException: The remove operation must be called after a 
valid next operation.
{code}
Since we could not control user's access on iterator, we should fix this bug to 
avoid unexpected exception.


> [State] Fix arbitrary iterator access on RocksDBMapIterator
> ---
>
> Key: FLINK-10267
> URL: https://issues.apache.org/jira/browse/FLINK-10267
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.3, 1.6.0
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.5.4
>
>
> Currently, RocksDBMapIterator loads up to 128 entries into the local cacheEntries 
> each time it needs to. Both RocksDBMapIterator#next() and 
> RocksDBMapIterator#hasNext() might trigger loading RocksDBEntry objects into 
> cacheEntries.
> However, if the iterator contains more than 128 entries and we access the 
> iterator in the following order: hasNext() -> next() -> hasNext() -> remove(), 
> we hit a weird exception when trying to remove the 128th element:
> {code:java}
> java.lang.IllegalStateException: The remove operation must be called after a 
> valid next operation.
> {code}
> Since we could not control user's access on iterator, we should fix this bug 
> to avoid unexpected exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10267) [State] Fix arbitrary iterator access on RocksDBMapIterator

2018-08-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10267:
---
Labels: pull-request-available  (was: )

> [State] Fix arbitrary iterator access on RocksDBMapIterator
> ---
>
> Key: FLINK-10267
> URL: https://issues.apache.org/jira/browse/FLINK-10267
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.3, 1.6.0
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.5.4
>
>
> Currently, RocksDBMapIterator loads up to 128 entries into its local 
> cacheEntries. Both RocksDBMapIterator#next() and RocksDBMapIterator#hasNext() 
> can trigger loading RocksDBEntry objects into cacheEntries.
> However, if the iterator covers more than 128 entries and it is accessed in 
> the order hasNext() -> next() -> hasNext() -> remove(), we hit an unexpected 
> exception when trying to remove the 128th element:
> {code:java}
> java.lang.IllegalStateException: The remove operation must be called after a 
> valid next operation.
> {code}
> Since we cannot control how users access the iterator, we should fix this bug 
> to avoid the unexpected exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10267) [State] Fix arbitrary iterator access on RocksDBMapIterator

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597719#comment-16597719
 ] 

ASF GitHub Bot commented on FLINK-10267:


Myasuka opened a new pull request #6638: [FLINK-10267][State] Fix arbitrary 
iterator access on RocksDBMapIterator
URL: https://github.com/apache/flink/pull/6638
 
 
   ## What is the purpose of the change
   
   This pull request fixes arbitrary iterator access on RocksDBMapIterator to 
avoid an unexpected exception.
   
   ## Brief change log
   
   Fix the `RocksDBMapIterator#loadCache()` logic to re-add the not-yet-removed 
`lastEntry` as the first entry in `cacheEntries`. 
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
 - Added unit test for `StateBackendTestBase#testMapState()`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency):  no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): don't know
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature?  no
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [State] Fix arbitrary iterator access on RocksDBMapIterator
> ---
>
> Key: FLINK-10267
> URL: https://issues.apache.org/jira/browse/FLINK-10267
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.3, 1.6.0
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.5.4
>
>
> Currently, RocksDBMapIterator loads up to 128 entries into its local 
> cacheEntries. Both RocksDBMapIterator#next() and RocksDBMapIterator#hasNext() 
> can trigger loading RocksDBEntry objects into cacheEntries.
> However, if the iterator covers more than 128 entries and it is accessed in 
> the order hasNext() -> next() -> hasNext() -> remove(), we hit an unexpected 
> exception when trying to remove the 128th element:
> {code:java}
> java.lang.IllegalStateException: The remove operation must be called after a 
> valid next operation.
> {code}
> Since we cannot control how users access the iterator, we should fix this bug 
> to avoid the unexpected exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] Myasuka opened a new pull request #6638: [FLINK-10267][State] Fix arbitrary iterator access on RocksDBMapIterator

2018-08-30 Thread GitBox
Myasuka opened a new pull request #6638: [FLINK-10267][State] Fix arbitrary 
iterator access on RocksDBMapIterator
URL: https://github.com/apache/flink/pull/6638
 
 
   ## What is the purpose of the change
   
   This pull request fixes arbitrary iterator access on RocksDBMapIterator to 
avoid an unexpected exception.
   
   ## Brief change log
   
   Fix the `RocksDBMapIterator#loadCache()` logic to re-add the not-yet-removed 
`lastEntry` as the first entry in `cacheEntries`. 
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
 - Added unit test for `StateBackendTestBase#testMapState()`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency):  no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): don't know
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature?  no
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10242) Disable latency metrics by default

2018-08-30 Thread Stephan Ewen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597714#comment-16597714
 ] 

Stephan Ewen commented on FLINK-10242:
--

How hard would it be to set the default in the flink-conf.yaml?
This seems like an option relevant to the ops side, so the config would be a 
good place.
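
For illustration, a minimal sketch of what the shipped default could look like in flink-conf.yaml (assuming the latency marker interval stays configurable under a key such as `metrics.latency.interval`, with 0 meaning disabled; both the key name and the semantics are assumptions here):
{code}
# Latency tracking disabled by default; set an interval in milliseconds to enable it.
metrics.latency.interval: 0
{code}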

> Disable latency metrics by default
> --
>
> Key: FLINK-10242
> URL: https://issues.apache.org/jira/browse/FLINK-10242
> Project: Flink
>  Issue Type: Sub-task
>  Components: Configuration, Metrics
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.7.0
>
>
> With the plethora of recent issues around latency metrics, we should disable 
> them by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10242) Disable latency metrics by default

2018-08-30 Thread Stephan Ewen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597712#comment-16597712
 ] 

Stephan Ewen commented on FLINK-10242:
--

+1

> Disable latency metrics by default
> --
>
> Key: FLINK-10242
> URL: https://issues.apache.org/jira/browse/FLINK-10242
> Project: Flink
>  Issue Type: Sub-task
>  Components: Configuration, Metrics
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.7.0
>
>
> With the plethora of recent issues around latency metrics, we should disable 
> them by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10267) [State] Fix arbitrary iterator access on RocksDBMapIterator

2018-08-30 Thread Yun Tang (JIRA)
Yun Tang created FLINK-10267:


 Summary: [State] Fix arbitrary iterator access on 
RocksDBMapIterator
 Key: FLINK-10267
 URL: https://issues.apache.org/jira/browse/FLINK-10267
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.6.0, 1.5.3
Reporter: Yun Tang
Assignee: Yun Tang
 Fix For: 1.6.1, 1.5.4


Currently, RocksDBMapIterator loads up to 128 entries into its local cacheEntries. 
Both RocksDBMapIterator#next() and RocksDBMapIterator#hasNext() can trigger 
loading RocksDBEntry objects into cacheEntries.

However, if the iterator covers more than 128 entries and it is accessed in the 
order hasNext() -> next() -> hasNext() -> remove(), we hit an unexpected 
exception when trying to remove the 128th element:
{code:java}
java.lang.IllegalStateException: The remove operation must be called after a 
valid next operation.
{code}
Since we cannot control how users access the iterator, we should fix this bug to 
avoid the unexpected exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7964) Add Apache Kafka 1.0/1.1 connectors

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597689#comment-16597689
 ] 

ASF GitHub Bot commented on FLINK-7964:
---

eliaslevy commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 
connectors
URL: https://github.com/apache/flink/pull/6577#issuecomment-417394653
 
 
   @yanghua that comment by @pnowojski is not applicable to my question, 
because I am not suggesting that we upgrade the Kafka dependency without 
upgrading the connector, nor that we don't support newer Kafka client versions. 
 I am suggesting we have a single Kafka connector (except for 0.9 and 0.8) that 
always tracks the latest Kafka client version.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Apache Kafka 1.0/1.1 connectors
> ---
>
> Key: FLINK-7964
> URL: https://issues.apache.org/jira/browse/FLINK-7964
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.4.0
>Reporter: Hai Zhou
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Kafka 1.0.0 is no mere bump of the version number. The Apache Kafka Project 
> Management Committee has packed a number of valuable enhancements into the 
> release. Here is a summary of a few of them:
> * Since its introduction in version 0.10, the Streams API has become hugely 
> popular among Kafka users, including the likes of Pinterest, Rabobank, 
> Zalando, and The New York Times. In 1.0, the API continues to evolve at a 
> healthy pace. To begin with, the builder API has been improved (KIP-120). A 
> new API has been added to expose the state of active tasks at runtime 
> (KIP-130). The new cogroup API makes it much easier to deal with partitioned 
> aggregates with fewer StateStores and fewer moving parts in your code 
> (KIP-150). Debuggability gets easier with enhancements to the print() and 
> writeAsText() methods (KIP-160). And if that’s not enough, check out KIP-138 
> and KIP-161 too. For more on streams, check out the Apache Kafka Streams 
> documentation, including some helpful new tutorial videos.
> * Operating Kafka at scale requires that the system remain observable, and to 
> make that easier, we’ve made a number of improvements to metrics. These are 
> too many to summarize without becoming tedious, but Connect metrics have been 
> significantly improved (KIP-196), a litany of new health check metrics are 
> now exposed (KIP-188), and we now have a global topic and partition count 
> (KIP-168). Check out KIP-164 and KIP-187 for even more.
> * We now support Java 9, leading, among other things, to significantly faster 
> TLS and CRC32C implementations. Over-the-wire encryption will be faster now, 
> which will keep Kafka fast and compute costs low when encryption is enabled.
> * In keeping with the security theme, KIP-152 cleans up the error handling on 
> Simple Authentication Security Layer (SASL) authentication attempts. 
> Previously, some authentication error conditions were indistinguishable from 
> broker failures and were not logged in a clear way. This is cleaner now.
> * Kafka can now tolerate disk failures better. Historically, JBOD storage 
> configurations have not been recommended, but the architecture has 
> nevertheless been tempting: after all, why not rely on Kafka’s own 
> replication mechanism to protect against storage failure rather than using 
> RAID? With KIP-112, Kafka now handles disk failure more gracefully. A single 
> disk failure in a JBOD broker will not bring the entire broker down; rather, 
> the broker will continue serving any log files that remain on functioning 
> disks.
> * Since release 0.11.0, the idempotent producer (which is the producer used 
> in the presence of a transaction, which of course is the producer we use for 
> exactly-once processing) required max.in.flight.requests.per.connection to be 
> equal to one. As anyone who has written or tested a wire protocol can attest, 
> this put an upper bound on throughput. Thanks to KAFKA-5949, this can now be 
> as large as five, relaxing the throughput constraint quite a bit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] eliaslevy commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 connectors

2018-08-30 Thread GitBox
eliaslevy commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 
connectors
URL: https://github.com/apache/flink/pull/6577#issuecomment-417394653
 
 
   @yanghua that comment by @pnowojski is not applicable to my question, 
because I am not suggesting that we upgrade the Kafka dependency without 
upgrading the connector, nor that we don't support newer Kafka client versions. 
 I am suggesting we have a single Kafka connector (except for 0.9 and 0.8) that 
always tracks the latest Kafka client version.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-10266) Web UI has outdated entry about externalized checkpoints

2018-08-30 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-10266:


 Summary: Web UI has outdated entry about externalized checkpoints
 Key: FLINK-10266
 URL: https://issues.apache.org/jira/browse/FLINK-10266
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Affects Versions: 1.6.0, 1.5.3
Reporter: Stephan Ewen


The web UI says "Persist Checkpoints Externally Disabled" even though starting 
from 1.5.0 all checkpoints are always externally addressable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7964) Add Apache Kafka 1.0/1.1 connectors

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597662#comment-16597662
 ] 

ASF GitHub Bot commented on FLINK-7964:
---

eliaslevy commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 
connectors
URL: https://github.com/apache/flink/pull/6577#issuecomment-417387441
 
 
   @aljoscha 
   * [KIP-35 - Retrieving protocol 
version](https://cwiki-test.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version)
   * [KIP-97: Improved Kafka Client RPC Compatibility 
Policy](https://cwiki-test.apache.org/confluence/display/KAFKA/KIP-97%3A+Improved+Kafka+Client+RPC+Compatibility+Policy)
   
   > This KIP suggests revising the Kafka Java client compatibility policy so 
that it is two-way, rather than one-way.  Not only should older brokers support 
newer clients, but newer clients should support older brokers.
   
   > In some cases, older brokers will be unable to perform a feature supported 
by newer clients.  For example, retrieving message offsets from a message 
timestamp (KIP-79) is not supported in versions before 0.10.1.  In these cases, 
the new client will throw an ObsoleteBrokerVersion exception when attempting to 
perform the operation.  This will let clients know that the reason why the 
operation cannot be performed is because the broker version in use does not 
support it.  At the same time, clients which do not attempt to use the new 
operations will proceed as before.
   
   * [Compatibility 
Matrix](https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix)
   
   Basically, any KIP-35 client can talk to a Kafka broker >= 0.10.0, which for 
the Java client 
[means](https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version)
 versions >= 0.10.2.
   
   @pnowojski I am suggesting that, except possibly for 0.8 and 0.9 clients, 
there is only a need for a single Kafka connector based on the latest Kafka 
client (2.0.0 as of now).  There is no need for separate 2.0.0, 1.x.y, 
0.11.0.x, and 0.10.x.y connectors. A connector based on 2.0.0 can talk to all 
those brokers.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Apache Kafka 1.0/1.1 connectors
> ---
>
> Key: FLINK-7964
> URL: https://issues.apache.org/jira/browse/FLINK-7964
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.4.0
>Reporter: Hai Zhou
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Kafka 1.0.0 is no mere bump of the version number. The Apache Kafka Project 
> Management Committee has packed a number of valuable enhancements into the 
> release. Here is a summary of a few of them:
> * Since its introduction in version 0.10, the Streams API has become hugely 
> popular among Kafka users, including the likes of Pinterest, Rabobank, 
> Zalando, and The New York Times. In 1.0, the API continues to evolve at a 
> healthy pace. To begin with, the builder API has been improved (KIP-120). A 
> new API has been added to expose the state of active tasks at runtime 
> (KIP-130). The new cogroup API makes it much easier to deal with partitioned 
> aggregates with fewer StateStores and fewer moving parts in your code 
> (KIP-150). Debuggability gets easier with enhancements to the print() and 
> writeAsText() methods (KIP-160). And if that’s not enough, check out KIP-138 
> and KIP-161 too. For more on streams, check out the Apache Kafka Streams 
> documentation, including some helpful new tutorial videos.
> * Operating Kafka at scale requires that the system remain observable, and to 
> make that easier, we’ve made a number of improvements to metrics. These are 
> too many to summarize without becoming tedious, but Connect metrics have been 
> significantly improved (KIP-196), a litany of new health check metrics are 
> now exposed (KIP-188), and we now have a global topic and partition count 
> (KIP-168). Check out KIP-164 and KIP-187 for even more.
> * We now support Java 9, leading, among other things, to significantly faster 
> TLS and CRC32C implementations. Over-the-wire encryption will be faster now, 
> which will keep Kafka fast and compute costs low when encryption is enabled.
> * In keeping with the security theme, KIP-152 cleans up the error handling on 
> Simple Authentication Security Layer (SASL) authentication attempts. 
> Previously, some authentication error conditions were indistinguishable from 
> broker failures and were not 

[GitHub] eliaslevy commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 connectors

2018-08-30 Thread GitBox
eliaslevy commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 
connectors
URL: https://github.com/apache/flink/pull/6577#issuecomment-417387441
 
 
   @aljoscha 
   * [KIP-35 - Retrieving protocol 
version](https://cwiki-test.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version)
   * [KIP-97: Improved Kafka Client RPC Compatibility 
Policy](https://cwiki-test.apache.org/confluence/display/KAFKA/KIP-97%3A+Improved+Kafka+Client+RPC+Compatibility+Policy)
   
   > This KIP suggests revising the Kafka Java client compatibility policy so 
that it is two-way, rather than one-way.  Not only should older brokers support 
newer clients, but newer clients should support older brokers.
   
   > In some cases, older brokers will be unable to perform a feature supported 
by newer clients.  For example, retrieving message offsets from a message 
timestamp (KIP-79) is not supported in versions before 0.10.1.  In these cases, 
the new client will throw an ObsoleteBrokerVersion exception when attempting to 
perform the operation.  This will let clients know that the reason why the 
operation cannot be performed is because the broker version in use does not 
support it.  At the same time, clients which do not attempt to use the new 
operations will proceed as before.
   
   * [Compatibility 
Matrix](https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix)
   
   Basically, any KIP-35 client can talk to a Kafka broker >= 0.10.0, which for 
the Java client 
[means](https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version)
 versions >= 0.10.2.
   
   @pnowojski I am suggesting that, except possibly for 0.8 and 0.9 clients, 
there is only a need for a single Kafka connector based on the latest Kafka 
client (2.0.0 as of now).  There is no need for separate 2.0.0, 1.x.y, 
0.11.0.x, and 0.10.x.y connectors. A connector based on 2.0.0 can talk to all 
those brokers.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-10265) Configure checkpointing behavior for SQL Client

2018-08-30 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-10265:
-
Description: 
The SQL Client environment file should expose checkpointing related properties:

- enable checkpointing
- checkpointing interval
- mode
- timeout
- etc. see {{org.apache.flink.streaming.api.environment.CheckpointConfig}}

Per-job selection of state backends and their configuration.
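
A rough sketch of how such a section could look in the environment file (every key below is hypothetical and only illustrates the intended scope; the final names are up to the implementation):
{code}
execution:
  checkpointing:
    enabled: true
    interval: 60000        # ms
    mode: exactly-once     # or at-least-once
    timeout: 600000        # ms
  state-backend:
    type: rocksdb          # per-job selection of the state backend
    checkpoint-dir: hdfs:///flink/checkpoints
{code}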


  was:
The SQL Client environment file expose checkpointing related properties:

- enable checkpointing
- checkpointing interval
- mode
- timeout
- etc. see {{org.apache.flink.streaming.api.environment.CheckpointConfig}}

Per-job selection of state backends and their configuration.



> Configure checkpointing behavior for SQL Client
> ---
>
> Key: FLINK-10265
> URL: https://issues.apache.org/jira/browse/FLINK-10265
> Project: Flink
>  Issue Type: New Feature
>  Components: SQL Client
>Reporter: Timo Walther
>Priority: Major
>
> The SQL Client environment file should expose checkpointing related 
> properties:
> - enable checkpointing
> - checkpointing interval
> - mode
> - timeout
> - etc. see {{org.apache.flink.streaming.api.environment.CheckpointConfig}}
> Per-job selection of state backends and their configuration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10265) Configure checkpointing behavior for SQL Client

2018-08-30 Thread Timo Walther (JIRA)
Timo Walther created FLINK-10265:


 Summary: Configure checkpointing behavior for SQL Client
 Key: FLINK-10265
 URL: https://issues.apache.org/jira/browse/FLINK-10265
 Project: Flink
  Issue Type: New Feature
  Components: SQL Client
Reporter: Timo Walther


The SQL Client environment file expose checkpointing related properties:

- enable checkpointing
- checkpointing interval
- mode
- timeout
- etc. see {{org.apache.flink.streaming.api.environment.CheckpointConfig}}

Per-job selection of state backends and their configuration.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10264) Exception is misleading when no default constructor define for events

2018-08-30 Thread Swapan Shaw (JIRA)
Swapan Shaw created FLINK-10264:
---

 Summary: Exception is misleading when no default constructor 
define for events
 Key: FLINK-10264
 URL: https://issues.apache.org/jira/browse/FLINK-10264
 Project: Flink
  Issue Type: Bug
  Components: DataStream API
Reporter: Swapan Shaw


When I do not declare a default constructor in RequestEvent, I get the exception 
below, which is misleading.

 
{code:java}
DataStream<RequestEvent> inputStream = env.addSource(...);
tEnv.registerDataStream(...);
{code}
 

 
{code:java}
Exception in thread "main" org.apache.flink.table.api.TableException: Only the 
first field can reference an atomic type.
 at 
org.apache.flink.table.api.TableEnvironment$$anonfun$5.apply(TableEnvironment.scala:996)
 at 
org.apache.flink.table.api.TableEnvironment$$anonfun$5.apply(TableEnvironment.scala:991)
 at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
 at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
 at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
 at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
 at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
 at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:186)
 at 
org.apache.flink.table.api.TableEnvironment.getFieldInfo(TableEnvironment.scala:991)
 at 
org.apache.flink.table.api.StreamTableEnvironment.registerDataStreamInternal(StreamTableEnvironment.scala:546)
 at 
org.apache.flink.table.api.java.StreamTableEnvironment.registerDataStream(StreamTableEnvironment.scala:133)
{code}
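
For context, a minimal sketch of an event class that reproduces this (hypothetical, since the actual RequestEvent is not shown in the report): without a public no-argument constructor the type is not recognized as a POJO, so the Table API treats it as an atomic type and fails with the misleading message above.
{code:java}
// Hypothetical event type: public fields, but no default (no-argument) constructor.
public class RequestEvent {
    public String requestId;
    public long timestamp;

    public RequestEvent(String requestId, long timestamp) {
        this.requestId = requestId;
        this.timestamp = timestamp;
    }
    // Missing: public RequestEvent() {}  -- adding it lets the registration succeed.
}
{code}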



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-8951) Support OVER windows PARTITION BY (rounded) timestamp

2018-08-30 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597605#comment-16597605
 ] 

Hequn Cheng edited comment on FLINK-8951 at 8/30/18 4:02 PM:
-

[~sandro19] Hi, thanks for contributing to Flink. Do you have any plans? How 
would you like to implement it?


was (Author: hequn8128):
[~sandro19] Hi, do you have any plans. How would you like to implement it.

> Support OVER windows PARTITION BY (rounded) timestamp
> -
>
> Key: FLINK-8951
> URL: https://issues.apache.org/jira/browse/FLINK-8951
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Priority: Minor
>
> There are a few interesting use cases that can be addressed by queries that 
> follow the following pattern
> {code:sql}
> SELECT sensorId, COUNT(*) OVER (PARTITION BY CEIL(rowtime TO HOUR) ORDER BY 
> temp ROWS BETWEEN UNBOUNDED preceding AND CURRENT ROW) FROM sensors
> {code}
> Such queries can be used to compute rolling cascading (tumbling) windows with 
> aggregates that are reset in regular intervals. This can be useful for TOP-K 
> per minute/hour/day queries.
> Right now, such {{OVER}} windows are not supported, because we require that 
> the {{ORDER BY}} clause is defined on a timestamp (time indicator) attribute. 
> In order to support this kind of queries, we would require that the 
> {{PARTITION BY}} clause contains a timestamp (time indicator) attribute or a 
> function that is defined on it and which is monotonicity preserving. Once the 
> optimizer identifies this case, it could translate the query into a special 
> time-partitioned OVER window operator.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8951) Support OVER windows PARTITION BY (rounded) timestamp

2018-08-30 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597605#comment-16597605
 ] 

Hequn Cheng commented on FLINK-8951:


[~sandro19] Hi, do you have any plans? How would you like to implement it?

> Support OVER windows PARTITION BY (rounded) timestamp
> -
>
> Key: FLINK-8951
> URL: https://issues.apache.org/jira/browse/FLINK-8951
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Priority: Minor
>
> There are a few interesting use cases that can be addressed by queries that 
> follow the following pattern
> {code:sql}
> SELECT sensorId, COUNT(*) OVER (PARTITION BY CEIL(rowtime TO HOUR) ORDER BY 
> temp ROWS BETWEEN UNBOUNDED preceding AND CURRENT ROW) FROM sensors
> {code}
> Such queries can be used to compute rolling cascading (tumbling) windows with 
> aggregates that are reset in regular intervals. This can be useful for TOP-K 
> per minute/hour/day queries.
> Right now, such {{OVER}} windows are not supported, because we require that 
> the {{ORDER BY}} clause is defined on a timestamp (time indicator) attribute. 
> In order to support this kind of queries, we would require that the 
> {{PARTITION BY}} clause contains a timestamp (time indicator) attribute or a 
> function that is defined on it and which is monotonicity preserving. Once the 
> optimizer identifies this case, it could translate the query into a special 
> time-partitioned OVER window operator.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10263) User-defined function with LITERAL parameters yields CompileException

2018-08-30 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-10263:
-

 Summary: User-defined function with LITERAL parameters yields 
CompileException
 Key: FLINK-10263
 URL: https://issues.apache.org/jira/browse/FLINK-10263
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.7.0
Reporter: Fabian Hueske
Assignee: Timo Walther


When using a user-defined scalar function only with literal parameters, a 
{{CompileException}} is thrown. For example

{code}
SELECT myFunc(CAST(40.750444 AS FLOAT), CAST(-73.993475 AS FLOAT))

public class MyFunc extends ScalarFunction {

	public int eval(float lon, float lat) {
		// do something
		return 0; // placeholder so the snippet compiles
	}
}
{code}

results in 

{code}
[ERROR] Could not execute SQL statement. Reason:
org.codehaus.commons.compiler.CompileException: Line 5, Column 10: Cannot 
determine simple type name "com"
{code}

The problem is probably caused by the expression reducer because it disappears 
if a regular attribute is added to a parameter expression.
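
A sketch of that observation (table and column names are hypothetical; the point is that referencing at least one regular attribute avoids the failing expression reduction):
{code:sql}
-- reported failure: all arguments are literals
SELECT myFunc(CAST(40.750444 AS FLOAT), CAST(-73.993475 AS FLOAT))

-- no failure observed once a regular attribute appears in a parameter expression
SELECT myFunc(CAST(lon AS FLOAT), CAST(-73.993475 AS FLOAT)) FROM sensors
{code}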



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10262) Web UI does not show error stack trace any more when program submission fails

2018-08-30 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-10262:


 Summary: Web UI does not show error stack trace any more when 
program submission fails
 Key: FLINK-10262
 URL: https://issues.apache.org/jira/browse/FLINK-10262
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Affects Versions: 1.6.0
Reporter: Stephan Ewen


Earlier versions reported the stack trace of exceptions that occurred in the 
program.
Flink 1.6 shows only 
{{org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error.}}, which makes debugging harder.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10230) Support printing the query of a view

2018-08-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10230:
---
Labels: pull-request-available  (was: )

> Support printing the query of a view
> 
>
> Key: FLINK-10230
> URL: https://issues.apache.org/jira/browse/FLINK-10230
> Project: Flink
>  Issue Type: New Feature
>  Components: SQL Client
>Reporter: Timo Walther
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> FLINK-10163 added initial support for views in SQL Client. We should add a 
> command that allows for printing the query of a view for debugging. MySQL 
> offers {{SHOW CREATE VIEW}} for this. Hive generalizes this to {{SHOW CREATE 
> TABLE}}. The latter one could be extended to also show information about the 
> used table factories and properties.
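
For illustration, a hypothetical SQL Client session (the exact syntax and output format are up to the implementation):
{code:sql}
-- view defined earlier in the session or in the environment file
CREATE VIEW MyView AS SELECT user_id, COUNT(*) AS cnt FROM Clicks GROUP BY user_id;

-- prints the defining query of the view for debugging
SHOW CREATE VIEW MyView;
{code}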



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10230) Support printing the query of a view

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597589#comment-16597589
 ] 

ASF GitHub Bot commented on FLINK-10230:


yanghua opened a new pull request #6637: [FLINK-10230][SQL Client] Support 
printing the query of a view
URL: https://github.com/apache/flink/pull/6637
 
 
   ## What is the purpose of the change
   
   *This pull request supports printing the query of a view*
   
   ## Brief change log
   
 - *Support printing the query of a view with statement `SHOW CREATE VIEW`*
   
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
*SqlCommandParserTest#testCommands and LocalExecutorITCase#testShowCreateView*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / **docs** / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support printing the query of a view
> 
>
> Key: FLINK-10230
> URL: https://issues.apache.org/jira/browse/FLINK-10230
> Project: Flink
>  Issue Type: New Feature
>  Components: SQL Client
>Reporter: Timo Walther
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> FLINK-10163 added initial support for views in SQL Client. We should add a 
> command that allows for printing the query of a view for debugging. MySQL 
> offers {{SHOW CREATE VIEW}} for this. Hive generalizes this to {{SHOW CREATE 
> TABLE}}. The latter one could be extended to also show information about the 
> used table factories and properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua opened a new pull request #6637: [FLINK-10230][SQL Client] Support printing the query of a view

2018-08-30 Thread GitBox
yanghua opened a new pull request #6637: [FLINK-10230][SQL Client] Support 
printing the query of a view
URL: https://github.com/apache/flink/pull/6637
 
 
   ## What is the purpose of the change
   
   *This pull request supports printing the query of a view*
   
   ## Brief change log
   
 - *Support printing the query of a view with statement `SHOW CREATE VIEW`*
   
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
*SqlCommandParserTest#testCommands and LocalExecutorITCase#testShowCreateView*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / **docs** / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-10261) INSERT INTO does not work with ORDER BY clause

2018-08-30 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-10261:
-
Description: 
It seems that INSERT INTO and ORDER BY do not work well together.

An AssertionError is thrown and the ORDER BY clause is duplicated. I guess this 
is a Calcite issue.

Example:
{code}
@Test
  def testInsertIntoMemoryTable(): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
val tEnv = TableEnvironment.getTableEnvironment(env)
MemoryTableSourceSinkUtil.clear()

val t = StreamTestData.getSmall3TupleDataStream(env)
.assignAscendingTimestamps(x => x._2)
  .toTable(tEnv, 'a, 'b, 'c, 'rowtime.rowtime)
tEnv.registerTable("sourceTable", t)

val fieldNames = Array("d", "e", "f", "t")
val fieldTypes = Array(Types.INT, Types.LONG, Types.STRING, 
Types.SQL_TIMESTAMP)
  .asInstanceOf[Array[TypeInformation[_]]]
val sink = new MemoryTableSourceSinkUtil.UnsafeMemoryAppendTableSink
tEnv.registerTableSink("targetTable", fieldNames, fieldTypes, sink)

val sql = "INSERT INTO targetTable SELECT a, b, c, rowtime FROM sourceTable 
ORDER BY a"
tEnv.sqlUpdate(sql)
env.execute()
{code}

Error:
{code}
java.lang.AssertionError: not a query: SELECT `sourceTable`.`a`, 
`sourceTable`.`b`, `sourceTable`.`c`, `sourceTable`.`rowtime`
FROM `sourceTable` AS `sourceTable`
ORDER BY `a`
ORDER BY `a`

at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3069)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:557)
at 
org.apache.flink.table.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:104)
at 
org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:717)
at 
org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:683)
at 
org.apache.flink.table.runtime.stream.sql.SqlITCase.testInsertIntoMemoryTable(SqlITCase.scala:735)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
{code}


  was:
It seems that INSERT INTO and ORDER BY do not work well together.

An AssertionError is thrown and the ORDER BY clause is duplicated.

Example:
{code}
@Test
  def testInsertIntoMemoryTable(): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
val tEnv = TableEnvironment.getTableEnvironment(env)
MemoryTableSourceSinkUtil.clear()

val t = StreamTestData.getSmall3TupleDataStream(env)
.assignAscendingTimestamps(x => x._2)
  .toTable(tEnv, 'a, 'b, 'c, 'rowtime.rowtime)
tEnv.registerTable("sourceTable", t)

val fieldNames = Array("d", "e", "f", "t")
val fieldTypes = Array(Types.INT, Types.LONG, Types.STRING, 
Types.SQL_TIMESTAMP)
  .asInstanceOf[Array[TypeInformation[_]]]
val sink = new MemoryTableSourceSinkUtil.UnsafeMemoryAppendTableSink
tEnv.registerTableSink("targetTable", fieldNames, fieldTypes, sink)

val sql = "INSERT INTO targetTable SELECT a, b, c, rowtime FROM sourceTable 
ORDER BY a"
tEnv.sqlUpdate(sql)
env.execute()
{code}

Error:
{code}
java.lang.AssertionError: not a query: SELECT `sourceTable`.`a`, 
`sourceTable`.`b`, `sourceTable`.`c`, `sourceTable`.`rowtime`
FROM `sourceTable` AS `sourceTable`
ORDER BY `a`
ORDER BY `a`

at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3069)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:557)
at 
org.apache.flink.table.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:104)
at 
org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:717)
at 
org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:683)
at 
org.apache.flink.table.runtime.stream.sql.SqlITCase.testInsertIntoMemoryTable(SqlITCase.scala:735)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 

[jira] [Created] (FLINK-10261) INSERT INTO does not work with ORDER BY clause

2018-08-30 Thread Timo Walther (JIRA)
Timo Walther created FLINK-10261:


 Summary: INSERT INTO does not work with ORDER BY clause
 Key: FLINK-10261
 URL: https://issues.apache.org/jira/browse/FLINK-10261
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Timo Walther


It seems that INSERT INTO and ORDER BY do not work well together.

An AssertionError is thrown and the ORDER BY clause is duplicated.

Example:
{code}
@Test
  def testInsertIntoMemoryTable(): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
val tEnv = TableEnvironment.getTableEnvironment(env)
MemoryTableSourceSinkUtil.clear()

val t = StreamTestData.getSmall3TupleDataStream(env)
.assignAscendingTimestamps(x => x._2)
  .toTable(tEnv, 'a, 'b, 'c, 'rowtime.rowtime)
tEnv.registerTable("sourceTable", t)

val fieldNames = Array("d", "e", "f", "t")
val fieldTypes = Array(Types.INT, Types.LONG, Types.STRING, 
Types.SQL_TIMESTAMP)
  .asInstanceOf[Array[TypeInformation[_]]]
val sink = new MemoryTableSourceSinkUtil.UnsafeMemoryAppendTableSink
tEnv.registerTableSink("targetTable", fieldNames, fieldTypes, sink)

val sql = "INSERT INTO targetTable SELECT a, b, c, rowtime FROM sourceTable 
ORDER BY a"
tEnv.sqlUpdate(sql)
env.execute()
{code}

Error:
{code}
java.lang.AssertionError: not a query: SELECT `sourceTable`.`a`, 
`sourceTable`.`b`, `sourceTable`.`c`, `sourceTable`.`rowtime`
FROM `sourceTable` AS `sourceTable`
ORDER BY `a`
ORDER BY `a`

at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3069)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:557)
at 
org.apache.flink.table.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:104)
at 
org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:717)
at 
org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:683)
at 
org.apache.flink.table.runtime.stream.sql.SqlITCase.testInsertIntoMemoryTable(SqlITCase.scala:735)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10208) Bump mockito to 2.0+

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597555#comment-16597555
 ] 

ASF GitHub Bot commented on FLINK-10208:


zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump 
mockito to 2.21.0 / powermock to 2.0.0-beta.5
URL: https://github.com/apache/flink/pull/6634#discussion_r214063691
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TimerServiceTest.java
 ##
 @@ -22,7 +22,7 @@
 import org.apache.flink.util.TestLogger;
 
 import org.junit.Test;
-import org.mockito.internal.util.reflection.Whitebox;
+import org.powermock.reflect.Whitebox;
 
 Review comment:
   We could, but in the long term I'd like to get rid of powermock completely.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bump mockito to 2.0+
> 
>
> Key: FLINK-10208
> URL: https://issues.apache.org/jira/browse/FLINK-10208
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Mockito only properly supports java 9 with version 2. We have to bump the 
> dependency and fix various API incompatibilities.
> Additionally we could investigate whether we still need powermock after 
> bumping the dependency (which we'd also have to bump otherwise).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump mockito to 2.21.0 / powermock to 2.0.0-beta.5

2018-08-30 Thread GitBox
zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump 
mockito to 2.21.0 / powermock to 2.0.0-beta.5
URL: https://github.com/apache/flink/pull/6634#discussion_r214063691
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TimerServiceTest.java
 ##
 @@ -22,7 +22,7 @@
 import org.apache.flink.util.TestLogger;
 
 import org.junit.Test;
-import org.mockito.internal.util.reflection.Whitebox;
+import org.powermock.reflect.Whitebox;
 
 Review comment:
   We could, but in the long term I'd like to get rid of powermock completely.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-10260) Confusing log during TaskManager registration

2018-08-30 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-10260:


 Summary: Confusing log during TaskManager registration
 Key: FLINK-10260
 URL: https://issues.apache.org/jira/browse/FLINK-10260
 Project: Flink
  Issue Type: Bug
  Components: ResourceManager
Affects Versions: 1.6.0
Reporter: Stephan Ewen


During startup, when TaskManagers register, I see a lot of confusing log lines.

The case below happened during startup of a cloud setup where TaskManagers took 
a varying amount of time to start and might have started before the JobManager.

{code}
-- Logs begin at Thu 2018-08-30 14:51:58 UTC, end at Thu 2018-08-30 14:55:39 
UTC. --
Aug 30 14:52:52 flink-belgium-1 systemd[1]: Started flink-jobmanager.service.
-- Subject: Unit flink-jobmanager.service has finished start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- Unit flink-jobmanager.service has finished starting up.
-- 
-- The start-up result is RESULT.
Aug 30 14:52:52 flink-belgium-1 jobmanager.sh[5416]: used deprecated key 
`jobmanager.heap.mb`, please replace with key `jobmanager.heap.size`
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: Starting standalonesession 
as a console application on host flink-belgium-1.
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,221 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 

Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,222 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Starting 
StandaloneSessionClusterEntrypoint (Version: 1.6.0, Rev:ff472b4, 
Date:07.08.2018 @ 13:31:13 UTC)
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,222 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  OS 
current user: flink
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,718 
WARN  org.apache.hadoop.util.NativeCodeLoader   - Unable to 
load native-hadoop library for your platform... using builtin-java classes 
where applicable
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,847 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Current 
Hadoop/Kerberos user: flink
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,848 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  JVM: 
OpenJDK 64-Bit Server VM - Oracle Corporation - 1.8/25.181-b13
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,848 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Maximum 
heap size: 1963 MiBytes
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,848 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  
JAVA_HOME: (not set)
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,853 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Hadoop 
version: 2.8.3
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,853 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  JVM 
Options:
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,853 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
-Xms2048m
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,853 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
-Xmx2048m
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,853 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
-Dlog4j.configuration=file:/opt/flink/conf/log4j-console.properties
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,853 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
-Dlogback.configurationFile=file:/opt/flink/conf/logback-console.xml
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,854 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Program 
Arguments:
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,854 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
--configDir
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,854 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
/opt/flink/conf
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,854 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
--executionMode
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 14:52:53,855 
INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
cluster
Aug 30 14:52:53 flink-belgium-1 jobmanager.sh[5416]: 2018-08-30 

[jira] [Commented] (FLINK-5029) Implement KvState SSL

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-5029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597550#comment-16597550
 ] 

ASF GitHub Bot commented on FLINK-5029:
---

EronWright commented on issue #6626: [FLINK-5029] [QueryableState] SSL Support
URL: https://github.com/apache/flink/pull/6626#issuecomment-417351436
 
 
   cc @kl0u @aljoscha 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Implement KvState SSL
> -
>
> Key: FLINK-5029
> URL: https://issues.apache.org/jira/browse/FLINK-5029
> Project: Flink
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Eron Wright 
>Assignee: Eron Wright 
>Priority: Major
>  Labels: pull-request-available
>
> The KVState endpoint is new to 1.2 and should support SSL as the others do.
> Note that, with FLINK-4898, the SSL support code is decoupled from the 
> NettyClient/NettyServer, so can be used by the KvState code by simply 
> installing the `SSLProtocolHandler`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] EronWright commented on issue #6626: [FLINK-5029] [QueryableState] SSL Support

2018-08-30 Thread GitBox
EronWright commented on issue #6626: [FLINK-5029] [QueryableState] SSL Support
URL: https://github.com/apache/flink/pull/6626#issuecomment-417351436
 
 
   cc @kl0u @aljoscha 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10229) Support listing of views

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597546#comment-16597546
 ] 

ASF GitHub Bot commented on FLINK-10229:


yanghua commented on issue #6631: [FLINK-10229][SQL CLIENT] Support listing of 
views
URL: https://github.com/apache/flink/pull/6631#issuecomment-417350379
 
 
   @walterddr and @hequn8128 Can you help me review this PR? Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support listing of views
> 
>
> Key: FLINK-10229
> URL: https://issues.apache.org/jira/browse/FLINK-10229
> Project: Flink
>  Issue Type: New Feature
>  Components: SQL Client
>Reporter: Timo Walther
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> FLINK-10163 added initial support of views for the SQL Client. Following 
> other database vendors, views are listed in the {{SHOW TABLES}} output. However, 
> there should be a way of listing only the views. We can support a {{SHOW 
> VIEWS}} command.
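
A sketch of the intended behaviour (hypothetical session; exact output is up to the implementation):
{code:sql}
SHOW TABLES;   -- lists tables and views today
SHOW VIEWS;    -- would list only the views
{code}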



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on issue #6631: [FLINK-10229][SQL CLIENT] Support listing of views

2018-08-30 Thread GitBox
yanghua commented on issue #6631: [FLINK-10229][SQL CLIENT] Support listing of 
views
URL: https://github.com/apache/flink/pull/6631#issuecomment-417350379
 
 
   @walterddr and @hequn8128 Can you help me review this PR? Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10208) Bump mockito to 2.0+

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597542#comment-16597542
 ] 

ASF GitHub Bot commented on FLINK-10208:


zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump 
mockito to 2.21.0 / powermock to 2.0.0-beta.5
URL: https://github.com/apache/flink/pull/6634#discussion_r214063691
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TimerServiceTest.java
 ##
 @@ -22,7 +22,7 @@
 import org.apache.flink.util.TestLogger;
 
 import org.junit.Test;
-import org.mockito.internal.util.reflection.Whitebox;
+import org.powermock.reflect.Whitebox;
 
 Review comment:
   We could, but in the long term I'd like to get rid of powermockito 
completely.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bump mockito to 2.0+
> 
>
> Key: FLINK-10208
> URL: https://issues.apache.org/jira/browse/FLINK-10208
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Mockito only properly supports java 9 with version 2. We have to bump the 
> dependency and fix various API incompatibilities.
> Additionally we could investigate whether we still need powermock after 
> bumping the dependency (which we'd also have to bump otherwise).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump mockito to 2.21.0 / powermock to 2.0.0-beta.5

2018-08-30 Thread GitBox
zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump 
mockito to 2.21.0 / powermock to 2.0.0-beta.5
URL: https://github.com/apache/flink/pull/6634#discussion_r214063691
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TimerServiceTest.java
 ##
 @@ -22,7 +22,7 @@
 import org.apache.flink.util.TestLogger;
 
 import org.junit.Test;
-import org.mockito.internal.util.reflection.Whitebox;
+import org.powermock.reflect.Whitebox;
 
 Review comment:
   We could, but in the long term I'd like to get rid of powermockito 
completely.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10208) Bump mockito to 2.0+

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597541#comment-16597541
 ] 

ASF GitHub Bot commented on FLINK-10208:


zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump 
mockito to 2.21.0 / powermock to 2.0.0-beta.5
URL: https://github.com/apache/flink/pull/6634#discussion_r214063691
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TimerServiceTest.java
 ##
 @@ -22,7 +22,7 @@
 import org.apache.flink.util.TestLogger;
 
 import org.junit.Test;
-import org.mockito.internal.util.reflection.Whitebox;
+import org.powermock.reflect.Whitebox;
 
 Review comment:
   We could, but in the long term I'd like to get rid of powermockito completely.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bump mockito to 2.0+
> 
>
> Key: FLINK-10208
> URL: https://issues.apache.org/jira/browse/FLINK-10208
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Mockito only properly supports java 9 with version 2. We have to bump the 
> dependency and fix various API incompatibilities.
> Additionally we could investigate whether we still need powermock after 
> bumping the dependency (which we'd also have to bump otherwise).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump mockito to 2.21.0 / powermock to 2.0.0-beta.5

2018-08-30 Thread GitBox
zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump 
mockito to 2.21.0 / powermock to 2.0.0-beta.5
URL: https://github.com/apache/flink/pull/6634#discussion_r214063691
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TimerServiceTest.java
 ##
 @@ -22,7 +22,7 @@
 import org.apache.flink.util.TestLogger;
 
 import org.junit.Test;
-import org.mockito.internal.util.reflection.Whitebox;
+import org.powermock.reflect.Whitebox;
 
 Review comment:
   We could, but in the long term I'd like to get rid of powermockito completely.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-1960) Add comments and docs for withForwardedFields and related operators

2018-08-30 Thread Dimitris Palyvos (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-1960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597540#comment-16597540
 ] 

Dimitris Palyvos commented on FLINK-1960:
-

Hi,

This issue seems abandoned, so I took the initiative to add some simple 
documentation based on the related Java documentation.

Since this is my first contribution, I am not sure what the procedure is. Do I 
need to be the assignee of the issue, or can I just submit a pull request?

Thank you in advance!

> Add comments and docs for withForwardedFields and related operators
> ---
>
> Key: FLINK-1960
> URL: https://issues.apache.org/jira/browse/FLINK-1960
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Theodore Vasiloudis
>Assignee: hzhuangzhenxi
>Priority: Minor
>  Labels: documentation, starter
>
> The withForwardedFields and related operators have no docs for the Scala API. 
> It would be useful to have code comments and example usage in the docs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10206) Add hbase sink connector

2018-08-30 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597538#comment-16597538
 ] 

Hequn Cheng commented on FLINK-10206:
-

Hi [~dangdangdang], it would be better to have a design document first. There are some 
points we need to pay attention to (for streaming):
- AppendTableSink or UpsertTableSink.
- Provide a batch mode for better performance.
- Failover should be taken into consideration (implement CheckpointedFunction); a rough 
sketch of this point follows below.
- If it is an UpsertTableSink, how do we process a delete message? There are two 
options: delete only the columns or delete the whole rowkey.
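
To make the failover point concrete, here is a rough, hypothetical sketch (not the
connector proposed in this issue): a streaming HBase sink that buffers Puts and
flushes them both on a size threshold and on every checkpoint via
CheckpointedFunction. The table name, column family and the assumed Row layout
(field 0 = rowkey, field 1 = payload) are made up for the example.

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.types.Row;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedHBaseSink extends RichSinkFunction<Row> implements CheckpointedFunction {

    private static final int FLUSH_SIZE = 100;

    private transient Connection connection;
    private transient Table table;
    private transient List<Put> buffer;

    @Override
    public void open(Configuration parameters) throws Exception {
        connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
        table = connection.getTable(TableName.valueOf("example_table"));
        buffer = new ArrayList<>();
    }

    @Override
    public void invoke(Row value) throws Exception {
        // Assumed layout: field 0 = rowkey, field 1 = payload.
        Put put = new Put(Bytes.toBytes(value.getField(0).toString()));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("payload"),
                Bytes.toBytes(value.getField(1).toString()));
        buffer.add(put);
        if (buffer.size() >= FLUSH_SIZE) {
            flush();
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        // Flush everything buffered so far before the checkpoint completes.
        flush();
    }

    @Override
    public void initializeState(FunctionInitializationContext context) {
        // Nothing to restore: all pending data was flushed at checkpoint time.
    }

    private void flush() throws Exception {
        if (!buffer.isEmpty()) {
            table.put(buffer);
            buffer.clear();
        }
    }

    @Override
    public void close() throws Exception {
        flush();
        if (table != null) {
            table.close();
        }
        if (connection != null) {
            connection.close();
        }
    }
}
{code}

Flushing in snapshotState is what ties the buffered writes to Flink's checkpoint
barrier, so a completed checkpoint never covers data that has not yet been written
to HBase.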


> Add hbase sink connector
> 
>
> Key: FLINK-10206
> URL: https://issues.apache.org/jira/browse/FLINK-10206
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Affects Versions: 1.6.0
>Reporter: Igloo
>Assignee: Shimin Yang
>Priority: Major
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Now, there is an HBase source connector for batch operation. 
>  
> In some cases, we need to save streaming/batch results into HBase, just like 
> the Cassandra streaming/batch sink implementations. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10050) Support 'allowedLateness' in CoGroupedStreams

2018-08-30 Thread eugen yushin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597537#comment-16597537
 ] 

eugen yushin commented on FLINK-10050:
--

thx, I'm proceeding with PR then
will keep you posted

> Support 'allowedLateness' in CoGroupedStreams
> -
>
> Key: FLINK-10050
> URL: https://issues.apache.org/jira/browse/FLINK-10050
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 1.5.1, 1.6.0
>Reporter: eugen yushin
>Priority: Major
>  Labels: ready-to-commit, windows
>
> WindowedStream supports the 'allowedLateness' feature, while 
> CoGroupedStreams does not. At the same time, WindowedStream is an inner part 
> of CoGroupedStreams and all main functionality (like evictor/trigger/...) is 
> simply delegated to WindowedStream.
> There is no way to handle late-arriving data from previous steps in 
> cogroups (and joins). Consider the following flow:
> a. read data from source1 -> aggregate data with allowed lateness
> b. read data from source2 -> aggregate data with allowed lateness
> c. cogroup/join streams a and b, and compare aggregated values
> Step c doesn't accept any late data from steps a/b due to the lack of an 
> `allowedLateness` API call in CoGroupedStreams.java.
> Scope: add the method `WithWindow.allowedLateness` to the Java API 
> (flink-streaming-java) and extend the Scala API (flink-streaming-scala).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10248) Add DataSet HBase Sink

2018-08-30 Thread Hequn Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-10248:

Summary: Add DataSet HBase Sink  (was: Add HBase Table Sink)

> Add DataSet HBase Sink
> --
>
> Key: FLINK-10248
> URL: https://issues.apache.org/jira/browse/FLINK-10248
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Reporter: Shimin Yang
>Assignee: Shimin Yang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10206) Add hbase sink connector

2018-08-30 Thread Hequn Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-10206:

Description: 
Now, there is an HBase source connector for batch operation.

In some cases, we need to save streaming/batch results into HBase, just like the 
Cassandra streaming/batch sink implementations.


  was:
Now, there is a hbase connector for batch operation. 

 

In some cases, we need to save Streaming result into hbase.  Just like 
cassandra streaming sink implementations. 

 


> Add hbase sink connector
> 
>
> Key: FLINK-10206
> URL: https://issues.apache.org/jira/browse/FLINK-10206
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Affects Versions: 1.6.0
>Reporter: Igloo
>Assignee: Shimin Yang
>Priority: Major
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Now, there is an HBase source connector for batch operation. 
>  
> In some cases, we need to save streaming/batch results into HBase, just like 
> the Cassandra streaming/batch sink implementations. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10206) Add hbase sink connector

2018-08-30 Thread Hequn Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-10206:

Summary: Add hbase sink connector  (was: Add hbase stream connector)

> Add hbase sink connector
> 
>
> Key: FLINK-10206
> URL: https://issues.apache.org/jira/browse/FLINK-10206
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Affects Versions: 1.6.0
>Reporter: Igloo
>Assignee: Shimin Yang
>Priority: Major
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Now, there is a hbase connector for batch operation. 
>  
> In some cases, we need to save Streaming result into hbase.  Just like 
> cassandra streaming sink implementations. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10254) Fix inappropriate checkNotNull in stateBackend

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597514#comment-16597514
 ] 

ASF GitHub Bot commented on FLINK-10254:


Aitozi commented on issue #6632: [FLINK-10254][backend]Fix inappropriate 
checkNotNull in stateBackend
URL: https://github.com/apache/flink/pull/6632#issuecomment-417339692
 
 
   I didn't run into a problem, but when I reviewed the code locally I found 
that a checkNotNull does nothing for a primitive int value. It's indeed checked 
further up, but I think it's meant as defensive code here, so I pushed a small change.
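
To illustrate the point (hypothetical class, not the actual state backend code):
checkNotNull on a primitive int is always a no-op because autoboxing can never
produce null, so the meaningful defensive check is on the value range via
checkArgument. The exact lower bound is whatever the surrounding code requires.

{code:java}
import static org.apache.flink.util.Preconditions.checkArgument;

public class KeyGroupConfigExample {

    private final int numberOfKeyGroups;

    public KeyGroupConfigExample(int numberOfKeyGroups) {
        // checkNotNull(numberOfKeyGroups) would autobox the int into an Integer
        // that can never be null, so the check can never fail -- it verifies nothing.
        // The meaningful defensive check is on the value range instead:
        checkArgument(numberOfKeyGroups >= 1, "The number of key groups must be at least 1.");
        this.numberOfKeyGroups = numberOfKeyGroups;
    }

    public int getNumberOfKeyGroups() {
        return numberOfKeyGroups;
    }
}
{code}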


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Fix inappropriate checkNotNull in stateBackend
> --
>
> Key: FLINK-10254
> URL: https://issues.apache.org/jira/browse/FLINK-10254
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.6.0
>Reporter: aitozi
>Assignee: aitozi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> The checkNotNull is unnecessary for numberOfKeyGroups, which has a primitive type; 
> we just have to make sure it is bigger than 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] Aitozi commented on issue #6632: [FLINK-10254][backend]Fix inappropriate checkNotNull in stateBackend

2018-08-30 Thread GitBox
Aitozi commented on issue #6632: [FLINK-10254][backend]Fix inappropriate 
checkNotNull in stateBackend
URL: https://github.com/apache/flink/pull/6632#issuecomment-417339692
 
 
   I didn't run into a problem, but when I reviewed the code locally I found 
that a checkNotNull does nothing for a primitive int value. It's indeed checked 
further up, but I think it's meant as defensive code here, so I pushed a small change.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7360) Support Scala map type in Table API

2018-08-30 Thread xueyu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597513#comment-16597513
 ] 

xueyu commented on FLINK-7360:
--

It looks like the Flink Scala API does not support java.util.Map either, e.g. in 
fromCollection:
{code:scala}
import java.util.Collections

import org.apache.flink.streaming.api.scala._

val data = List( (1, Collections.singletonMap("foo", "bar")), 
  (2, Collections.singletonMap("foo1", "bar1")), 
  (3, Collections.singletonMap("foo2", "bar2")) )
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.fromCollection(data)
{code}
Do we need to support Java maps in the Scala API? [~twalthr]

> Support Scala map type in Table API
> ---
>
> Key: FLINK-7360
> URL: https://issues.apache.org/jira/browse/FLINK-7360
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API  SQL
>Reporter: Timo Walther
>Priority: Major
>
> Currently, Flink SQL supports only Java `java.util.Map`. Scala maps are 
> treated as a blackbox with Flink `GenericTypeInfo`/SQL `ANY` data type. 
> Therefore, you can forward these blackboxes and use them within scalar 
> functions but accessing with the `['key']` operator is not supported.
> We should convert these special collections at the beginning, in order to use 
> in a SQL statement.
> See: 
> https://stackoverflow.com/questions/45471503/flink-table-api-sql-and-map-types-scala



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10208) Bump mockito to 2.0+

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597496#comment-16597496
 ] 

ASF GitHub Bot commented on FLINK-10208:


azagrebin commented on a change in pull request #6634: [FLINK-10208][build] 
Bump mockito to 2.21.0 / powermock to 2.0.0-beta.5
URL: https://github.com/apache/flink/pull/6634#discussion_r214009807
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TimerServiceTest.java
 ##
 @@ -22,7 +22,7 @@
 import org.apache.flink.util.TestLogger;
 
 import org.junit.Test;
-import org.mockito.internal.util.reflection.Whitebox;
+import org.powermock.reflect.Whitebox;
 
 Review comment:
   Can this `org.powermock.reflect.Whitebox` be used everywhere instead of 
having Flink own Whitebox from old mockito?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bump mockito to 2.0+
> 
>
> Key: FLINK-10208
> URL: https://issues.apache.org/jira/browse/FLINK-10208
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Mockito only properly supports java 9 with version 2. We have to bump the 
> dependency and fix various API incompatibilities.
> Additionally we could investigate whether we still need powermock after 
> bumping the dependency (which we'd also have to bump otherwise).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] azagrebin commented on a change in pull request #6634: [FLINK-10208][build] Bump mockito to 2.21.0 / powermock to 2.0.0-beta.5

2018-08-30 Thread GitBox
azagrebin commented on a change in pull request #6634: [FLINK-10208][build] 
Bump mockito to 2.21.0 / powermock to 2.0.0-beta.5
URL: https://github.com/apache/flink/pull/6634#discussion_r214009807
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TimerServiceTest.java
 ##
 @@ -22,7 +22,7 @@
 import org.apache.flink.util.TestLogger;
 
 import org.junit.Test;
-import org.mockito.internal.util.reflection.Whitebox;
+import org.powermock.reflect.Whitebox;
 
 Review comment:
   Can this `org.powermock.reflect.Whitebox` be used everywhere instead of 
having Flink own Whitebox from old mockito?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9391) Support UNNEST in Table API

2018-08-30 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597484#comment-16597484
 ] 

Hequn Cheng commented on FLINK-9391:


[~SokolovMS] Hi, I think we don't need to add an UNNEST operator. We can add a 
built-in unnest table function. Currently, there are no built-in table 
functions in Flink, so you will probably need to find a good way to implement it 
without any examples to follow. However, there are built-in scalar functions and 
built-in aggregate functions (see the FunctionCatalog class) and you can take a 
look at how they are implemented.

I just took a look at the relevant code. Similar to 
Aggregation (org.apache.flink.table.expressions.Aggregation), maybe we can add 
another Expression type, say a TableFunction expression, which can return a 
tableFunctionCall.
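
As a rough illustration of the behaviour such a built-in function would provide
(this is a plain user-defined table function, not the built-in implementation
discussed above; the class name is made up):

{code:java}
import org.apache.flink.table.functions.TableFunction;

// Emits one row per element of an array column, which is essentially the
// behaviour UNNEST provides in SQL (FLINK-6033) and that a Table API
// counterpart would expose.
public class UnnestStrings extends TableFunction<String> {

    // Called once per input row; collect() emits one output row per element.
    public void eval(String[] values) {
        if (values == null) {
            return;
        }
        for (String v : values) {
            collect(v);
        }
    }
}
{code}

It would be registered and joined against the input table like any other
user-defined table function; a built-in version would essentially bake this into
the FunctionCatalog and expression layer instead.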

> Support UNNEST in Table API
> ---
>
> Key: FLINK-9391
> URL: https://issues.apache.org/jira/browse/FLINK-9391
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API  SQL
>Reporter: Timo Walther
>Assignee: Alina Ipatina
>Priority: Major
>
> FLINK-6033 introduced the UNNEST function for SQL. We should also add this 
> function to the Table API to keep the APIs in sync. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8500) Get the timestamp of the Kafka message from kafka consumer(Kafka010Fetcher)

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597466#comment-16597466
 ] 

ASF GitHub Bot commented on FLINK-8500:
---

alexeyt820 commented on issue #6105: [FLINK-8500] Get the timestamp of the 
Kafka message from kafka consumer
URL: https://github.com/apache/flink/pull/6105#issuecomment-417324038
 
 
   @aljoscha, yes, I can do it. Should we still try to preserve compatibility 
with the existing interface using a _default_ method, or just break it for good? What 
do you think?
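
For reference, a minimal sketch of the default-method option (the interface name and
parameter list are assumptions for illustration, not necessarily the exact
KeyedDeserializationSchema signature):

{code:java}
import java.io.IOException;

public interface TimestampedKeyedDeserializationSchema<T> {

    // Existing-style method that current user implementations already provide.
    T deserialize(byte[] messageKey, byte[] message, String topic, int partition, long offset)
            throws IOException;

    // New overload carrying the Kafka record timestamp. The default implementation
    // ignores the timestamp and delegates to the old method, so existing
    // implementations keep compiling and behave exactly as before.
    default T deserialize(byte[] messageKey, byte[] message, String topic, int partition,
            long offset, long timestamp) throws IOException {
        return deserialize(messageKey, message, topic, partition, offset);
    }
}
{code}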


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Get the timestamp of the Kafka message from kafka consumer(Kafka010Fetcher)
> ---
>
> Key: FLINK-8500
> URL: https://issues.apache.org/jira/browse/FLINK-8500
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.4.0
>Reporter: yanxiaobin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
> Attachments: image-2018-01-30-14-58-58-167.png, 
> image-2018-01-31-10-48-59-633.png
>
>
> The method deserialize of KeyedDeserializationSchema needs a parameter 
> 'kafka message timestamp' (from ConsumerRecord). In some business scenarios, 
> this is useful!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] alexeyt820 commented on issue #6105: [FLINK-8500] Get the timestamp of the Kafka message from kafka consumer

2018-08-30 Thread GitBox
alexeyt820 commented on issue #6105: [FLINK-8500] Get the timestamp of the 
Kafka message from kafka consumer
URL: https://github.com/apache/flink/pull/6105#issuecomment-417324038
 
 
   @aljoscha, yes, I can do it. Should we still try to preserve compatibility 
with the existing interface using a _default_ method, or just break it for good? What 
do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-10259) Key validation for GroupWindowAggregate is broken

2018-08-30 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske reassigned FLINK-10259:
-

Assignee: Fabian Hueske

> Key validation for GroupWindowAggregate is broken
> -
>
> Key: FLINK-10259
> URL: https://issues.apache.org/jira/browse/FLINK-10259
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Major
> Fix For: 1.6.1, 1.7.0, 1.5.4
>
>
> WindowGroups have multiple equivalent keys (start, end) that should be 
> handled differently from other keys. The {{UpdatingPlanChecker}} uses 
> equivalence groups to identify equivalent keys but the keys of WindowGroups 
> are not correctly assigned to groups.
> This means that we cannot correctly extract keys from queries that use group 
> windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10259) Key validation for GroupWindowAggregate is broken

2018-08-30 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-10259:
-

 Summary: Key validation for GroupWindowAggregate is broken
 Key: FLINK-10259
 URL: https://issues.apache.org/jira/browse/FLINK-10259
 Project: Flink
  Issue Type: Bug
  Components: Table API  SQL
Reporter: Fabian Hueske
 Fix For: 1.6.1, 1.7.0, 1.5.4


WindowGroups have multiple equivalent keys (start, end) that should be handled 
differently from other keys. The {{UpdatingPlanChecker}} uses equivalence 
groups to identify equivalent keys but the keys of WindowGroups are not 
correctly assigned to groups.

This means that we cannot correctly extract keys from queries that use group 
windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10258) Allow streaming sources to be present for batch executions

2018-08-30 Thread Timo Walther (JIRA)
Timo Walther created FLINK-10258:


 Summary: Allow streaming sources to be present for batch 
executions 
 Key: FLINK-10258
 URL: https://issues.apache.org/jira/browse/FLINK-10258
 Project: Flink
  Issue Type: Bug
  Components: SQL Client
Reporter: Timo Walther


For example, if a filesystem connector with CSV format is defined and an update 
mode has been set, switching to {{SET execution.type=batch}} in the CLI makes the 
connector invalid and an exception blocks the execution of new SQL 
statements.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10257) Incorrect CHAR type support in Flink SQL and Table API

2018-08-30 Thread Piotr Nowojski (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-10257:
---
Description: 
Although we officially do not support the CHAR type, this type is visible and 
accessible to users. First of all, string literals have the default type of 
CHAR in SQL. Secondly, users can always cast expressions/columns to CHAR.

The problem is that we do not support CHAR correctly. We mishandle it in:
 # comparisons and functions
 # writing values to sinks

According to the SQL standard (and as almost all other databases do), CHAR 
comparisons should ignore trailing white spaces. On the other hand, functions like 
{{CONCAT}} or {{LENGTH}} shouldn't: 
[http://troels.arvin.dk/db/rdbms/#data_types-char].

Currently, Flink completely ignores those rules. Sometimes we internally store 
CHAR padded with spaces, sometimes without. This results in semi-random 
behaviour with respect to comparisons/functions/writing to sinks. For 
example, the following query:
{code:java}
tEnv.sqlQuery("SELECT CAST(s AS CHAR(10)) FROM 
sourceTable").insertInto("targetTable")
env.execute()
{code}
where `sourceTable` has a single {{VARCHAR(10)}} column with the values "Hi", 
"Hello", "Hello world", correctly writes unpadded strings to the sink, but the 
following query:
{code:java}
tEnv.sqlQuery("SELECT * FROM (SELECT CAST(s AS CHAR(10)) c FROM sourceTable) 
WHERE c = 'Hi'")
  .insertInto("targetTable")
env.execute(){code}
incorrectly filters out all of the results, because {{CAST(s AS CHAR(10))}} is 
a NOOP in Flink, while the 'Hi' constant handed to us by Calcite will be padded 
with 8 spaces.

On the other hand, the following query produces strings padded with spaces:
{code:java}
tEnv.sqlQuery("SELECT CASE l WHEN 1 THEN 'GERMANY' WHEN 2 THEN 'POLAND' ELSE 
'this should not happen' END FROM sourceTable")
  .insertInto("targetTable")
env.execute()

val expected = Seq(
  "GERMANY",
  "POLAND",
  "POLAND").mkString("\n")

org.junit.ComparisonFailure: Different elements in arrays: expected 3 elements 
and received 3
expected: [GERMANY, POLAND, POLAND]
received: [GERMANY , POLAND , POLAND ]
{code}
To make matters even worse, Calcite's constant folding performs comparisons 
correctly, while the same comparisons performed by Flink yield 
different results. In other words, in SQL:
{code:java}
SELECT 'POLAND' = 'POLAND  '
{code}
returns true, but the same expression performed on columns
{code:java}
SELECT CAST(country as CHAR(10)) = CAST(country_padded as CHAR(10)) FROM 
countries{code}
returns false.

To further complicate things, in SQL our string literals have the {{CHAR}} type, 
while in the Table API our literals have the String type (effectively {{VARCHAR}}), 
making results inconsistent between those two APIs.

 

CC [~twalthr] [~fhueske] [~hequn8128]

  was:
Despite that we officially do not support CHAR type, this type is visible and 
accessible for the users. First of all, string literals have default type of 
CHAR in SQL. Secondly users can always cast expressions/columns to CHAR.

Problem is that we do not support CHAR correctly. We mishandle it in:
 # comparisons and functions
 # writing values to sinks

According to SQL standard (and as almost all of the other databases do), CHAR 
comparisons should ignore white spaces. On the other hand, functions like 
{{CONCAT}} or {{LENGTH}} shouldn't: 
[http://troels.arvin.dk/db/rdbms/#data_types-char] .

Currently in In Flink we completely ignore those rules. Sometimes we store 
internally CHAR with padded spaces sometimes without. This results with semi 
random behaviour with respect to comparisons/functions/writing to sinks. For 
example following query:

 
{code:java}
tEnv.sqlQuery("SELECT CAST(s AS CHAR(10)) FROM 
sourceTable").insertInto("targetTable")
env.execute()
{code}
Where `sourceTable` has single {{VARCHAR(10)}} column with values: "Hi", 
"Hello", "Hello world", writes to sink not padded strings (correctly), but 
following query incorrectly

 
{code:java}
tEnv.sqlQuery("SELECT * FROM (SELECT CAST(s AS CHAR(10)) c FROM sourceTable) 
WHERE c = 'Hi'")
  .insertInto("targetTable")
env.execute(){code}
Incorrectly filters out all of the results, because {{CAST(s AS CHAR(10))}} is 
a NOOP in Flink, while 'Hi' constant handled by Calcite to us will be padded 
with 8 spaces.

On the other hand following query produces strings padded with spaces:

 
{code:java}
tEnv.sqlQuery("SELECT CASE l WHEN 1 THEN 'GERMANY' WHEN 2 THEN 'POLAND' ELSE 
'this should not happen' END FROM sourceTable")
  .insertInto("targetTable")
env.execute()

val expected = Seq(
  "GERMANY",
  "POLAND",
  "POLAND").mkString("\n")

org.junit.ComparisonFailure: Different elements in arrays: expected 3 elements 
and received 3
expected: [GERMANY, POLAND, POLAND]
received: [GERMANY , POLAND , POLAND ]
{code}
 

To make matter even worse, Calcite's constant folding correctly performs 
comparisons, while if same comparisons are 

[jira] [Comment Edited] (FLINK-10257) Incorrect CHAR type support in Flink SQL and Table API

2018-08-30 Thread Piotr Nowojski (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597434#comment-16597434
 ] 

Piotr Nowojski edited comment on FLINK-10257 at 8/30/18 1:14 PM:
-

I can think of about 3 solutions to this problem:
 # Drop {{CHAR}} support, and convert all string literals to {{VARCHAR}} in 
SQL. Any attempt to cast a column to {{CHAR}} should return an exception.
 # Provide internal {{CHAR}} support in SQL, while keeping it unsupported in 
Table API
 # Provide internal {{CHAR}} support both for SQL and Table API, potentially 
changing type of string literals in Table API to {{CHAR}} as well to make it 
consistent with SQL

For option 1., we could either:
 * convince Calcite to do this
 * or we would have to rewrite all {{CHAR}} types on our side in all RexNodes 
and RelNodes (similar to, but a bit more complicated than, 
{{RelTimeIndicatorConverter}})

For option 2., we would need to properly support {{CHAR}} in all string 
functions and comparisons, with respect to padding. Probably to make things 
more consistent, we should make a contract that either we internally store 
{{CHAR}} always padded or never padded (now it's semi random). For writing to 
connectors we would need to cast all {{CHAR}} columns to {{VARCHAR}} which 
would require trimming. 

For option 3. we would additionally need to add support for {{CHAR}} in Table 
API.

 

If we could convince Calcite to provide a {{VARCHAR}} / {{CHAR}} switch for 
literals, I would be in favour of option 1. If not, then I would vote for option 
2.


was (Author: pnowojski):
I can think of about 3 solutions to this problem:
 # Drop {{CHAR}} support, and convert all string literals to {{VARCHAR}} in 
SQL. Any attempt to cast a column to {{CHAR}} should return an exception.
 # Provide internal {{CHAR}} support in SQL, while keeping it unsupported in 
Table API
 # Provide internal {{CHAR}} support both for SQL and Table API, potentially 
changing type of string literals in Table API to {{CHAR}} as well to make it 
consistent with SQL

For option 1., we could either:
 * convince Calcite to do this
 * or we would have to rewrite all {{CHAR}} types on our side in all RexNodes 
and RelNodes (similarly, but a bit more complicated to 
{{RelTimeIndicatorConverter}}

For option 2., we would need to properly support {{CHAR}} in all string 
functions and comparisons, with respect to padding. Probably to make things 
more consistent, we should make a contract that either we internally store 
{{CHAR}} always padded or never padded (now it's semi random). For writing to 
connectors we would need to cast all {{CHAR}} columns to {{VARCHAR}} which 
would require trimming. 

For option 3. we would additionally need to add support for {{CHAR}} in Table 
API.

> Incorrect CHAR type support in Flink SQL and Table API
> --
>
> Key: FLINK-10257
> URL: https://issues.apache.org/jira/browse/FLINK-10257
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: Piotr Nowojski
>Priority: Critical
>
> Despite that we officially do not support CHAR type, this type is visible and 
> accessible for the users. First of all, string literals have default type of 
> CHAR in SQL. Secondly users can always cast expressions/columns to CHAR.
> Problem is that we do not support CHAR correctly. We mishandle it in:
>  # comparisons and functions
>  # writing values to sinks
> According to SQL standard (and as almost all of the other databases do), CHAR 
> comparisons should ignore white spaces. On the other hand, functions like 
> {{CONCAT}} or {{LENGTH}} shouldn't: 
> [http://troels.arvin.dk/db/rdbms/#data_types-char] .
> Currently in In Flink we completely ignore those rules. Sometimes we store 
> internally CHAR with padded spaces sometimes without. This results with semi 
> random behaviour with respect to comparisons/functions/writing to sinks. For 
> example following query:
>  
> {code:java}
> tEnv.sqlQuery("SELECT CAST(s AS CHAR(10)) FROM 
> sourceTable").insertInto("targetTable")
> env.execute()
> {code}
> Where `sourceTable` has single {{VARCHAR(10)}} column with values: "Hi", 
> "Hello", "Hello world", writes to sink not padded strings (correctly), but 
> following query incorrectly
>  
> {code:java}
> tEnv.sqlQuery("SELECT * FROM (SELECT CAST(s AS CHAR(10)) c FROM sourceTable) 
> WHERE c = 'Hi'")
>   .insertInto("targetTable")
> env.execute(){code}
> Incorrectly filters out all of the results, because {{CAST(s AS CHAR(10))}} 
> is a NOOP in Flink, while 'Hi' constant handled by Calcite to us will be 
> padded with 8 spaces.
> On the other hand following query produces strings padded with spaces:
>  
> {code:java}
> tEnv.sqlQuery("SELECT CASE l WHEN 1 THEN 'GERMANY' WHEN 2 THEN 'POLAND' ELSE 
> 'this should not happen' END 

[jira] [Commented] (FLINK-10257) Incorrect CHAR type support in Flink SQL and Table API

2018-08-30 Thread Piotr Nowojski (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597434#comment-16597434
 ] 

Piotr Nowojski commented on FLINK-10257:


I can think of about 3 solutions to this problem:
 # Drop {{CHAR}} support, and convert all string literals to {{VARCHAR}} in 
SQL. Any attempt to cast a column to {{CHAR}} should return an exception.
 # Provide internal {{CHAR}} support in SQL, while keeping it unsupported in 
Table API
 # Provide internal {{CHAR}} support both for SQL and Table API, potentially 
changing type of string literals in Table API to {{CHAR}} as well to make it 
consistent with SQL

For option 1., we could either:
 * convince Calcite to do this
 * or we would have to rewrite all {{CHAR}} types on our side in all RexNodes 
and RelNodes (similarly, but a bit more complicated to 
{{RelTimeIndicatorConverter}}

For option 2., we would need to properly support {{CHAR}} in all string 
functions and comparisons, with respect to padding. Probably to make things 
more consistent, we should make a contract that either we internally store 
{{CHAR}} always padded or never padded (now it's semi random). For writing to 
connectors we would need to cast all {{CHAR}} columns to {{VARCHAR}} which 
would require trimming. 

For option 3. we would additionally need to add support for {{CHAR}} in Table 
API.

> Incorrect CHAR type support in Flink SQL and Table API
> --
>
> Key: FLINK-10257
> URL: https://issues.apache.org/jira/browse/FLINK-10257
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: Piotr Nowojski
>Priority: Critical
>
> Despite that we officially do not support CHAR type, this type is visible and 
> accessible for the users. First of all, string literals have default type of 
> CHAR in SQL. Secondly users can always cast expressions/columns to CHAR.
> Problem is that we do not support CHAR correctly. We mishandle it in:
>  # comparisons and functions
>  # writing values to sinks
> According to SQL standard (and as almost all of the other databases do), CHAR 
> comparisons should ignore white spaces. On the other hand, functions like 
> {{CONCAT}} or {{LENGTH}} shouldn't: 
> [http://troels.arvin.dk/db/rdbms/#data_types-char] .
> Currently in In Flink we completely ignore those rules. Sometimes we store 
> internally CHAR with padded spaces sometimes without. This results with semi 
> random behaviour with respect to comparisons/functions/writing to sinks. For 
> example following query:
>  
> {code:java}
> tEnv.sqlQuery("SELECT CAST(s AS CHAR(10)) FROM 
> sourceTable").insertInto("targetTable")
> env.execute()
> {code}
> Where `sourceTable` has single {{VARCHAR(10)}} column with values: "Hi", 
> "Hello", "Hello world", writes to sink not padded strings (correctly), but 
> following query incorrectly
>  
> {code:java}
> tEnv.sqlQuery("SELECT * FROM (SELECT CAST(s AS CHAR(10)) c FROM sourceTable) 
> WHERE c = 'Hi'")
>   .insertInto("targetTable")
> env.execute(){code}
> Incorrectly filters out all of the results, because {{CAST(s AS CHAR(10))}} 
> is a NOOP in Flink, while 'Hi' constant handled by Calcite to us will be 
> padded with 8 spaces.
> On the other hand following query produces strings padded with spaces:
>  
> {code:java}
> tEnv.sqlQuery("SELECT CASE l WHEN 1 THEN 'GERMANY' WHEN 2 THEN 'POLAND' ELSE 
> 'this should not happen' END FROM sourceTable")
>   .insertInto("targetTable")
> env.execute()
> val expected = Seq(
>   "GERMANY",
>   "POLAND",
>   "POLAND").mkString("\n")
> org.junit.ComparisonFailure: Different elements in arrays: expected 3 
> elements and received 3
> expected: [GERMANY, POLAND, POLAND]
> received: [GERMANY , POLAND , POLAND ]
> {code}
>  
> To make matter even worse, Calcite's constant folding correctly performs 
> comparisons, while if same comparisons are performed by Flink, they yield 
> different results. In other words in SQL:
> {code:java}
> SELECT 'POLAND' = 'POLAND  '
> {code}
> return true, but same expression performed on columns
> {code:java}
> SELECT CAST(country as CHAR(10)) = CAST(country_padded as CHAR(10)) FROM 
> countries{code}
> returns false.
> To further complicated things, in SQL our string literals have {{CHAR}} type, 
> while in Table API our literals have String type (effectively {{VARCHAR}}) 
> making results inconsistent between those two APIs.
>  
> CC [~twalthr] [~fhueske] [~hequn8128]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10257) Incorrect CHAR type support in Flink SQL and Table API

2018-08-30 Thread Piotr Nowojski (JIRA)
Piotr Nowojski created FLINK-10257:
--

 Summary: Incorrect CHAR type support in Flink SQL and Table API
 Key: FLINK-10257
 URL: https://issues.apache.org/jira/browse/FLINK-10257
 Project: Flink
  Issue Type: Bug
  Components: Table API  SQL
Reporter: Piotr Nowojski


Despite that we officially do not support CHAR type, this type is visible and 
accessible for the users. First of all, string literals have default type of 
CHAR in SQL. Secondly users can always cast expressions/columns to CHAR.

Problem is that we do not support CHAR correctly. We mishandle it in:
 # comparisons and functions
 # writing values to sinks

According to SQL standard (and as almost all of the other databases do), CHAR 
comparisons should ignore white spaces. On the other hand, functions like 
{{CONCAT}} or {{LENGTH}} shouldn't: 
[http://troels.arvin.dk/db/rdbms/#data_types-char] .

Currently in In Flink we completely ignore those rules. Sometimes we store 
internally CHAR with padded spaces sometimes without. This results with semi 
random behaviour with respect to comparisons/functions/writing to sinks. For 
example following query:

 
{code:java}
tEnv.sqlQuery("SELECT CAST(s AS CHAR(10)) FROM 
sourceTable").insertInto("targetTable")
env.execute()
{code}
Where `sourceTable` has single {{VARCHAR(10)}} column with values: "Hi", 
"Hello", "Hello world", writes to sink not padded strings (correctly), but 
following query incorrectly

 
{code:java}
tEnv.sqlQuery("SELECT * FROM (SELECT CAST(s AS CHAR(10)) c FROM sourceTable) 
WHERE c = 'Hi'")
  .insertInto("targetTable")
env.execute(){code}
Incorrectly filters out all of the results, because {{CAST(s AS CHAR(10))}} is 
a NOOP in Flink, while 'Hi' constant handled by Calcite to us will be padded 
with 8 spaces.

On the other hand following query produces strings padded with spaces:

 
{code:java}
tEnv.sqlQuery("SELECT CASE l WHEN 1 THEN 'GERMANY' WHEN 2 THEN 'POLAND' ELSE 
'this should not happen' END FROM sourceTable")
  .insertInto("targetTable")
env.execute()

val expected = Seq(
  "GERMANY",
  "POLAND",
  "POLAND").mkString("\n")

org.junit.ComparisonFailure: Different elements in arrays: expected 3 elements 
and received 3
expected: [GERMANY, POLAND, POLAND]
received: [GERMANY , POLAND , POLAND ]
{code}
 

To make matter even worse, Calcite's constant folding correctly performs 
comparisons, while if same comparisons are performed by Flink, they yield 
different results. In other words in SQL:
{code:java}
SELECT 'POLAND' = 'POLAND  '
{code}
return true, but same expression performed on columns
{code:java}
SELECT CAST(country as CHAR(10)) = CAST(country_padded as CHAR(10)) FROM 
countries{code}
returns false.

To further complicated things, in SQL our string literals have {{CHAR}} type, 
while in Table API our literals have String type (effectively {{VARCHAR}}) 
making results inconsistent between those two APIs.

 

CC [~twalthr] [~fhueske] [~hequn8128]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10208) Bump mockito to 2.0+

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597417#comment-16597417
 ] 

ASF GitHub Bot commented on FLINK-10208:


zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump 
mockito to 2.21.0 / powermock to 2.0.0-beta.5
URL: https://github.com/apache/flink/pull/6634#discussion_r213907252
 
 

 ##
 File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/io/BarrierBufferTestBase.java
 ##
 @@ -531,7 +532,7 @@ public void testMultiChannelSkippingCheckpoints() throws 
Exception {
 
// checkpoint 3 aborted (end of partition)
check(sequence[20], buffer.getNextNonBlocked(), PAGE_SIZE);
-   verify(toNotify).abortCheckpointOnBarrier(eq(3L), 
any(CheckpointDeclineSubsumedException.class));
+   verify(toNotify).abortCheckpointOnBarrier(eq(3L), 
any(InputEndOfStreamException.class));
 
 Review comment:
   @NicoK @zhijiangW I'd love to get your input here. As far as I can tell the 
existing code was simply wrong, as this exception wasn't supposed to be thrown 
in this situation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bump mockito to 2.0+
> 
>
> Key: FLINK-10208
> URL: https://issues.apache.org/jira/browse/FLINK-10208
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Mockito only properly supports java 9 with version 2. We have to bump the 
> dependency and fix various API incompatibilities.
> Additionally we could investigate whether we still need powermock after 
> bumping the dependency (which we'd also have to bump otherwise).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump mockito to 2.21.0 / powermock to 2.0.0-beta.5

2018-08-30 Thread GitBox
zentol commented on a change in pull request #6634: [FLINK-10208][build] Bump 
mockito to 2.21.0 / powermock to 2.0.0-beta.5
URL: https://github.com/apache/flink/pull/6634#discussion_r213907252
 
 

 ##
 File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/io/BarrierBufferTestBase.java
 ##
 @@ -531,7 +532,7 @@ public void testMultiChannelSkippingCheckpoints() throws 
Exception {
 
// checkpoint 3 aborted (end of partition)
check(sequence[20], buffer.getNextNonBlocked(), PAGE_SIZE);
-   verify(toNotify).abortCheckpointOnBarrier(eq(3L), 
any(CheckpointDeclineSubsumedException.class));
+   verify(toNotify).abortCheckpointOnBarrier(eq(3L), 
any(InputEndOfStreamException.class));
 
 Review comment:
   @NicoK @zhijiangW I'd love to get your input here. As far as I can tell the 
existing code was simply wrong, as this exception wasn't supposed to be thrown 
in this situation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-5996) Jobmanager HA should not crash on lost ZK node

2018-08-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/FLINK-5996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominik Wosiński reassigned FLINK-5996:
---

Assignee: Dominik Wosiński

> Jobmanager HA should not crash on lost ZK node
> --
>
> Key: FLINK-5996
> URL: https://issues.apache.org/jira/browse/FLINK-5996
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.2.0
>Reporter: Gyula Fora
>Assignee: Dominik Wosiński
>Priority: Major
>
> Even if there are multiple zk hosts configured the jobmanager crashes if one 
> of them is lost:
> org.apache.flink.shaded.org.apache.curator.CuratorConnectionLossException: 
> KeeperErrorCode = ConnectionLoss
>   at 
> org.apache.flink.shaded.org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:197)
>   at 
> org.apache.flink.shaded.org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:87)
>   at 
> org.apache.flink.shaded.org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:115)
>   at 
> org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl.getZooKeeper(CuratorFrameworkImpl.java:477)
>   at 
> org.apache.flink.shaded.org.apache.curator.framework.imps.GetDataBuilderImpl$4.call(GetDataBuilderImpl.java:302)
>   at 
> org.apache.flink.shaded.org.apache.curator.framework.imps.GetDataBuilderImpl$4.call(GetDataBuilderImpl.java:291)
>   at 
> org.apache.flink.shaded.org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
>   at 
> org.apache.flink.shaded.org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
>   at 
> org.apache.flink.shaded.org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
>   at 
> org.apache.flink.shaded.org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
>   at 
> org.apache.flink.shaded.org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
>   at 
> org.apache.flink.shaded.org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
>   at 
> org.apache.flink.shaded.org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
>   at 
> org.apache.flink.shaded.org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> We should have some retry logic there



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10142) Reduce synchronization overhead for credit notifications

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597409#comment-16597409
 ] 

ASF GitHub Bot commented on FLINK-10142:


NicoK commented on issue #6555: [FLINK-10142][network] reduce locking around 
credit notification
URL: https://github.com/apache/flink/pull/6555#issuecomment-417307120
 
 
   Thanks @pnowojski and @zhijiangW  for the review!
   
   merging...


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Reduce synchronization overhead for credit notifications
> 
>
> Key: FLINK-10142
> URL: https://issues.apache.org/jira/browse/FLINK-10142
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.2, 1.6.0, 1.7.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
>
> When credit-based flow control was introduced, we also added some checks and 
> optimisations for uncommon code paths that make common code paths 
> unnecessarily more expensive, e.g. checking whether a channel was released 
> before forwarding a credit notification to Netty. Such checks would have to 
> be confirmed by the Netty thread anyway and thus only add additional load for 
> something that happens only once (per channel).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] NicoK commented on issue #6555: [FLINK-10142][network] reduce locking around credit notification

2018-08-30 Thread GitBox
NicoK commented on issue #6555: [FLINK-10142][network] reduce locking around 
credit notification
URL: https://github.com/apache/flink/pull/6555#issuecomment-417307120
 
 
   Thanks @pnowojski and @zhijiangW  for the review!
   
   merging...


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Clarkkkkk commented on issue #6628: [FLINK-10245] [Streaming Connector] Add Pojo, Tuple, Row and Scala Product DataStream Sink for HBase

2018-08-30 Thread GitBox
Clarkkkkk commented on issue #6628: [FLINK-10245] [Streaming Connector] Add 
Pojo, Tuple, Row and Scala Product DataStream Sink for HBase
URL: https://github.com/apache/flink/pull/6628#issuecomment-417302988
 
 
   cc @tillrohrmann 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10245) Add DataStream HBase Sink

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597390#comment-16597390
 ] 

ASF GitHub Bot commented on FLINK-10245:


Clarkkkkk commented on issue #6628: [FLINK-10245] [Streaming Connector] Add 
Pojo, Tuple, Row and Scala Product DataStream Sink for HBase
URL: https://github.com/apache/flink/pull/6628#issuecomment-417302988
 
 
   cc @tillrohrmann 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add DataStream HBase Sink
> -
>
> Key: FLINK-10245
> URL: https://issues.apache.org/jira/browse/FLINK-10245
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Reporter: Shimin Yang
>Assignee: Shimin Yang
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] dawidwys closed pull request #6629: [hotfix] [docs] Fix typo in batch connectors

2018-08-30 Thread GitBox
dawidwys closed pull request #6629: [hotfix] [docs] Fix typo in batch connectors
URL: https://github.com/apache/flink/pull/6629
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/dev/batch/connectors.md b/docs/dev/batch/connectors.md
index 76771378f49..7d13d1e2a34 100644
--- a/docs/dev/batch/connectors.md
+++ b/docs/dev/batch/connectors.md
@@ -27,7 +27,7 @@ under the License.
 
 ## Reading from file systems
 
-Flink has build-in support for the following file systems:
+Flink has built-in support for the following file systems:
 
 | Filesystem| Scheme   | Notes  |
 | - |--| -- |
@@ -86,7 +86,7 @@ This section shows some examples for connecting Flink to 
other systems.
 
 ## Avro support in Flink
 
-Flink has extensive build-in support for [Apache 
Avro](http://avro.apache.org/). This allows to easily read from Avro files with 
Flink.
+Flink has extensive built-in support for [Apache 
Avro](http://avro.apache.org/). This allows to easily read from Avro files with 
Flink.
 Also, the serialization framework of Flink is able to handle classes generated 
from Avro schemas. Be sure to include the Flink Avro dependency to the pom.xml 
of your project.
 
 {% highlight xml %}


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7964) Add Apache Kafka 1.0/1.1 connectors

2018-08-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597348#comment-16597348
 ] 

ASF GitHub Bot commented on FLINK-7964:
---

yanghua commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 
connectors
URL: https://github.com/apache/flink/pull/6577#issuecomment-417290923
 
 
   @pnowojski Thank you for giving me some tips. I will first make the current 
test cases pass, then I will try to refactor `flink-connector-kafka-0.11` and 
`flink-connector-kafka-1.0`. If I have any questions, I will ask you.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Apache Kafka 1.0/1.1 connectors
> ---
>
> Key: FLINK-7964
> URL: https://issues.apache.org/jira/browse/FLINK-7964
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.4.0
>Reporter: Hai Zhou
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Kafka 1.0.0 is no mere bump of the version number. The Apache Kafka Project 
> Management Committee has packed a number of valuable enhancements into the 
> release. Here is a summary of a few of them:
> * Since its introduction in version 0.10, the Streams API has become hugely 
> popular among Kafka users, including the likes of Pinterest, Rabobank, 
> Zalando, and The New York Times. In 1.0, the API continues to evolve at a 
> healthy pace. To begin with, the builder API has been improved (KIP-120). A 
> new API has been added to expose the state of active tasks at runtime 
> (KIP-130). The new cogroup API makes it much easier to deal with partitioned 
> aggregates with fewer StateStores and fewer moving parts in your code 
> (KIP-150). Debuggability gets easier with enhancements to the print() and 
> writeAsText() methods (KIP-160). And if that’s not enough, check out KIP-138 
> and KIP-161 too. For more on streams, check out the Apache Kafka Streams 
> documentation, including some helpful new tutorial videos.
> * Operating Kafka at scale requires that the system remain observable, and to 
> make that easier, we’ve made a number of improvements to metrics. These are 
> too many to summarize without becoming tedious, but Connect metrics have been 
> significantly improved (KIP-196), a litany of new health check metrics are 
> now exposed (KIP-188), and we now have a global topic and partition count 
> (KIP-168). Check out KIP-164 and KIP-187 for even more.
> * We now support Java 9, leading, among other things, to significantly faster 
> TLS and CRC32C implementations. Over-the-wire encryption will be faster now, 
> which will keep Kafka fast and compute costs low when encryption is enabled.
> * In keeping with the security theme, KIP-152 cleans up the error handling on 
> Simple Authentication Security Layer (SASL) authentication attempts. 
> Previously, some authentication error conditions were indistinguishable from 
> broker failures and were not logged in a clear way. This is cleaner now.
> * Kafka can now tolerate disk failures better. Historically, JBOD storage 
> configurations have not been recommended, but the architecture has 
> nevertheless been tempting: after all, why not rely on Kafka’s own 
> replication mechanism to protect against storage failure rather than using 
> RAID? With KIP-112, Kafka now handles disk failure more gracefully. A single 
> disk failure in a JBOD broker will not bring the entire broker down; rather, 
> the broker will continue serving any log files that remain on functioning 
> disks.
> * Since release 0.11.0, the idempotent producer (which is the producer used 
> in the presence of a transaction, which of course is the producer we use for 
> exactly-once processing) required max.in.flight.requests.per.connection to be 
> equal to one. As anyone who has written or tested a wire protocol can attest, 
> this put an upper bound on throughput. Thanks to KAFKA-5949, this can now be 
> as large as five, relaxing the throughput constraint quite a bit.
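
(Editorial illustration, not part of the JIRA description quoted above.) A 
minimal producer-side sketch of the last bullet, assuming Kafka 1.0 client 
libraries; the broker address and topic name are placeholders. With KAFKA-5949, 
enabling idempotence no longer forces max.in.flight.requests.per.connection 
down to one:

    // Sketch only: plain Kafka 1.0 producer client with placeholder broker/topic.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class IdempotentProducerExample {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // Idempotent (exactly-once style) delivery on the producer side ...
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
            // ... is no longer limited to a single in-flight request per connection.
            props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("example-topic", "key", "value"));
            }
        }
    }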



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 connectors

2018-08-30 Thread GitBox
yanghua commented on issue #6577: [FLINK-7964] Add Apache Kafka 1.0/1.1 
connectors
URL: https://github.com/apache/flink/pull/6577#issuecomment-417290923
 
 
   @pnowojski Thank you for the tips. I will first get the current test cases 
to pass, then I will try to refactor `flink-connector-kafka-0.11` and 
`flink-connector-kafka-1.0`. If I have any questions, I will ask you.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

