[GitHub] zentol opened a new pull request #6436: [FLINK-9978][release] Include relative path in source release sha file

2018-07-26 Thread GitBox
zentol opened a new pull request #6436: [FLINK-9978][release] Include relative 
path in source release sha file
URL: https://github.com/apache/flink/pull/6436
 
 
   ## What is the purpose of the change
   
   This PR fixes an issue with the sha generation for source release artifacts: 
the `sha512` file contained the absolute path to the file.
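   The failure mode is easy to reproduce with standard checksum tooling. Below is 
a minimal sketch; the file name and the use of `sha512sum` are illustrative, not 
the actual Flink release scripts:

```shell
# Sketch: why an absolute path in a sha512 file breaks verification elsewhere.
# "flink-src.tgz" is a stand-in for the real source release artifact.
dir=$(mktemp -d)
echo "dummy source release" > "$dir/flink-src.tgz"

# Hashing via an absolute path embeds that path in the checksum file,
# so verification fails on any machine where that path does not exist.
sha512sum "$dir/flink-src.tgz" > "$dir/abs.sha512"
grep -c "$dir" "$dir/abs.sha512"   # the absolute path is baked in

# Hashing from within the directory records only the relative file name,
# so "sha512sum -c" succeeds wherever the artifact sits next to the sha file.
(cd "$dir" && sha512sum flink-src.tgz > rel.sha512 && sha512sum -c rel.sha512)
```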
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-9978) Source release sha contains absolute file path

2018-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9978:
--
Labels: pull-request-available  (was: )

> Source release sha contains absolute file path
> --
>
> Key: FLINK-9978
> URL: https://issues.apache.org/jira/browse/FLINK-9978
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Affects Versions: 1.5.2, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.5.3, 1.6.0
>
>
> The sha file for source releases contains the absolute path to the file, 
> causing failures during verification on other machines since the path cannot 
> be found.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9978) Source release sha contains absolute file path

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559331#comment-16559331
 ] 

ASF GitHub Bot commented on FLINK-9978:
---

zentol opened a new pull request #6436: [FLINK-9978][release] Include relative 
path in source release sha file
URL: https://github.com/apache/flink/pull/6436
 
 
   ## What is the purpose of the change
   
   This PR fixes an issue with the sha generation for source release artifacts: 
the `sha512` file contained the absolute path to the file.
   
   




> Source release sha contains absolute file path
> --
>
> Key: FLINK-9978
> URL: https://issues.apache.org/jira/browse/FLINK-9978
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Affects Versions: 1.5.2, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.5.3, 1.6.0
>
>
> The sha file for source releases contains the absolute path to the file, 
> causing failures during verification on other machines since the path cannot 
> be found.





[jira] [Updated] (FLINK-9978) Source release sha contains absolute file path

2018-07-26 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-9978:

Fix Version/s: (was: 1.5.2)
   1.5.3

> Source release sha contains absolute file path
> --
>
> Key: FLINK-9978
> URL: https://issues.apache.org/jira/browse/FLINK-9978
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Affects Versions: 1.5.2, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.5.3, 1.6.0
>
>
> The sha file for source releases contains the absolute path to the file, 
> causing failures during verification on other machines since the path cannot 
> be found.





[jira] [Created] (FLINK-9978) Source release sha contains absolute file path

2018-07-26 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9978:
---

 Summary: Source release sha contains absolute file path
 Key: FLINK-9978
 URL: https://issues.apache.org/jira/browse/FLINK-9978
 Project: Flink
  Issue Type: Bug
  Components: Release System
Affects Versions: 1.5.2, 1.6.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.5.2, 1.6.0


The sha file for source releases contains the absolute path to the file, causing 
failures during verification on other machines since the path cannot be found.





[jira] [Commented] (FLINK-9885) End-to-end test: Elasticsearch 6.x connector

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559302#comment-16559302
 ] 

ASF GitHub Bot commented on FLINK-9885:
---

tzulitai commented on a change in pull request #6391: [FLINK-9885] [FLINK-8101] 
Finalize Elasticsearch 6.x
URL: https://github.com/apache/flink/pull/6391#discussion_r205674745
 
 

 ##
 File path: flink-connectors/flink-connector-elasticsearch6/src/main/java/org/apache/flink/streaming/connectors/elasticsearch6/Elasticsearch6ApiCallBridge.java
 ##
 @@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.elasticsearch6;
+
+import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchApiCallBridge;
+import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.http.HttpHost;
+import org.elasticsearch.action.bulk.BackoffPolicy;
+import org.elasticsearch.action.bulk.BulkItemResponse;
+import org.elasticsearch.action.bulk.BulkProcessor;
+import org.elasticsearch.client.RestClient;
+import org.elasticsearch.client.RestHighLevelClient;
+import org.elasticsearch.common.unit.TimeValue;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Implementation of {@link ElasticsearchApiCallBridge} for Elasticsearch 6 and later versions.
+ */
+public class Elasticsearch6ApiCallBridge implements ElasticsearchApiCallBridge {
+
+   private static final long serialVersionUID = -5222683870097809633L;
+
+   private static final Logger LOG = LoggerFactory.getLogger(Elasticsearch6ApiCallBridge.class);
+
+   /**
+    * User-provided HTTP Host.
+    */
+   private final List<HttpHost> httpHosts;
+
+   Elasticsearch6ApiCallBridge(List<HttpHost> httpHosts) {
+      Preconditions.checkArgument(httpHosts != null && !httpHosts.isEmpty());
+      this.httpHosts = httpHosts;
+   }
+
+   @Override
+   public RestHighLevelClient createClient(Map<String, String> clientConfig) {
+      RestHighLevelClient rhlClient =
 
 Review comment:
   Thanks @cjolif. Your suggestion to allow overrides sounds good; I'll add that.
   Either way, I'll still try to add in the features you mentioned (IIRC, you 
already provided some code snippets for that somewhere).




> End-to-end test: Elasticsearch 6.x connector
> 
>
> Key: FLINK-9885
> URL: https://issues.apache.org/jira/browse/FLINK-9885
> Project: Flink
>  Issue Type: Sub-task
>  Components: ElasticSearch Connector, Tests
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> We have decided to try and merge the pending Elasticsearch 6.x PRs. This 
> should also come with an end-to-end test that covers this.





[GitHub] tzulitai commented on a change in pull request #6391: [FLINK-9885] [FLINK-8101] Finalize Elasticsearch 6.x

2018-07-26 Thread GitBox
tzulitai commented on a change in pull request #6391: [FLINK-9885] [FLINK-8101] 
Finalize Elasticsearch 6.x
URL: https://github.com/apache/flink/pull/6391#discussion_r205674745
 
 

 ##
 File path: flink-connectors/flink-connector-elasticsearch6/src/main/java/org/apache/flink/streaming/connectors/elasticsearch6/Elasticsearch6ApiCallBridge.java
 ##
 @@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.elasticsearch6;
+
+import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchApiCallBridge;
+import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.http.HttpHost;
+import org.elasticsearch.action.bulk.BackoffPolicy;
+import org.elasticsearch.action.bulk.BulkItemResponse;
+import org.elasticsearch.action.bulk.BulkProcessor;
+import org.elasticsearch.client.RestClient;
+import org.elasticsearch.client.RestHighLevelClient;
+import org.elasticsearch.common.unit.TimeValue;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Implementation of {@link ElasticsearchApiCallBridge} for Elasticsearch 6 and later versions.
+ */
+public class Elasticsearch6ApiCallBridge implements ElasticsearchApiCallBridge {
+
+   private static final long serialVersionUID = -5222683870097809633L;
+
+   private static final Logger LOG = LoggerFactory.getLogger(Elasticsearch6ApiCallBridge.class);
+
+   /**
+    * User-provided HTTP Host.
+    */
+   private final List<HttpHost> httpHosts;
+
+   Elasticsearch6ApiCallBridge(List<HttpHost> httpHosts) {
+      Preconditions.checkArgument(httpHosts != null && !httpHosts.isEmpty());
+      this.httpHosts = httpHosts;
+   }
+
+   @Override
+   public RestHighLevelClient createClient(Map<String, String> clientConfig) {
+      RestHighLevelClient rhlClient =
 
 Review comment:
   Thanks @cjolif. Your suggestion to allow overrides sounds good; I'll add that.
   Either way, I'll still try to add in the features you mentioned (IIRC, you 
already provided some code snippets for that somewhere).




[GitHub] tzulitai opened a new pull request #6435: [FLINK-8994] [tests] Let general purpose DataStream jobs use Avro as state

2018-07-26 Thread GitBox
tzulitai opened a new pull request #6435: [FLINK-8994] [tests] Let general 
purpose DataStream jobs use Avro as state
URL: https://github.com/apache/flink/pull/6435
 
 
   ## What is the purpose of the change
   
   This improves the coverage of our general purpose DataStream job by having 
Avro type states.
   
   ## Brief change log
   
   - Add a keyed map operator that uses Avro type states
   - Strengthen the validation checks in the artificial keyed state mappers by 
also checking the sequence numbers.
   
   ## Verifying this change
   
   Run the savepoint end-to-end test, e.g.
   `./run-single-test.sh test-scripts/test_resume_savepoint.sh 2 4 rocks false rocks`
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   




[jira] [Commented] (FLINK-8994) Add a test operator with keyed state that uses Avro serializer (from schema/by reflection)

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559299#comment-16559299
 ] 

ASF GitHub Bot commented on FLINK-8994:
---

tzulitai opened a new pull request #6435: [FLINK-8994] [tests] Let general 
purpose DataStream jobs use Avro as state
URL: https://github.com/apache/flink/pull/6435
 
 
   ## What is the purpose of the change
   
   This improves the coverage of our general purpose DataStream job by having 
Avro type states.
   
   ## Brief change log
   
   - Add a keyed map operator that uses Avro type states
   - Strengthen the validation checks in the artificial keyed state mappers by 
also checking the sequence numbers.
   
   ## Verifying this change
   
   Run the savepoint end-to-end test, e.g.
   `./run-single-test.sh test-scripts/test_resume_savepoint.sh 2 4 rocks false rocks`
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   




> Add a test operator with keyed state that uses Avro serializer (from 
> schema/by reflection)
> --
>
> Key: FLINK-8994
> URL: https://issues.apache.org/jira/browse/FLINK-8994
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Assignee: Congxian Qiu
>Priority: Major
>  Labels: pull-request-available
>
> Add a test operator with keyed state that uses Avro serializer (from 
> schema/by reflection)





[jira] [Updated] (FLINK-8994) Add a test operator with keyed state that uses Avro serializer (from schema/by reflection)

2018-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-8994:
--
Labels: pull-request-available  (was: )

> Add a test operator with keyed state that uses Avro serializer (from 
> schema/by reflection)
> --
>
> Key: FLINK-8994
> URL: https://issues.apache.org/jira/browse/FLINK-8994
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Assignee: Congxian Qiu
>Priority: Major
>  Labels: pull-request-available
>
> Add a test operator with keyed state that uses Avro serializer (from 
> schema/by reflection)





[jira] [Created] (FLINK-9977) Refine the docs for Table/SQL built-in functions

2018-07-26 Thread Xingcan Cui (JIRA)
Xingcan Cui created FLINK-9977:
--

 Summary: Refine the docs for Table/SQL built-in functions
 Key: FLINK-9977
 URL: https://issues.apache.org/jira/browse/FLINK-9977
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Xingcan Cui
Assignee: Xingcan Cui


There exist some syntax errors or inconsistencies in documents and Scala docs 
of the Table/SQL built-in functions. This issue aims to make some improvements 
to them.





[jira] [Commented] (FLINK-9825) Upgrade checkstyle version to 8.6

2018-07-26 Thread zhangminglei (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559160#comment-16559160
 ] 

zhangminglei commented on FLINK-9825:
-

Hi [~Zentol], could you please give [~lsy] contributor permissions for Apache 
Flink? It will be his first time getting started with Flink. Thank you so much.

> Upgrade checkstyle version to 8.6
> -
>
> Key: FLINK-9825
> URL: https://issues.apache.org/jira/browse/FLINK-9825
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Priority: Minor
>
> We should upgrade checkstyle version to 8.6+ so that we can use the "match 
> violation message to this regex" feature for suppression. 





[jira] [Commented] (FLINK-6968) Store streaming, updating tables with unique key in queryable state

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559159#comment-16559159
 ] 

ASF GitHub Bot commented on FLINK-6968:
---

liurenjie1024 commented on issue #5688: [FLINK-6968][Table API & SQL] Add 
Queryable table sink.
URL: https://github.com/apache/flink/pull/5688#issuecomment-408291783
 
 
   I'm sorry for messing this up with a bad rebase. I've opened a cleaner PR for 
the same issue: https://github.com/apache/flink/pull/6434




> Store streaming, updating tables with unique key in queryable state
> ---
>
> Key: FLINK-6968
> URL: https://issues.apache.org/jira/browse/FLINK-6968
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
>
> Streaming tables with unique key are continuously updated. For example 
> queries with a non-windowed aggregation generate such tables. Commonly, such 
> updating tables are emitted via an upsert table sink to an external datastore 
> (k-v store, database) to make it accessible to applications.
> This issue is about adding a feature to store and maintain such a table as 
> queryable state in Flink. By storing the table in Flink's queryable state, we 
> do not need an external data store to access the results of the query but can 
> query the results directly from Flink.





[GitHub] liurenjie1024 commented on issue #5688: [FLINK-6968][Table API & SQL] Add Queryable table sink.

2018-07-26 Thread GitBox
liurenjie1024 commented on issue #5688: [FLINK-6968][Table API & SQL] Add 
Queryable table sink.
URL: https://github.com/apache/flink/pull/5688#issuecomment-408291783
 
 
   I'm sorry for messing this up with a bad rebase. I've opened a cleaner PR for 
the same issue: https://github.com/apache/flink/pull/6434




[jira] [Commented] (FLINK-8354) Flink Kafka connector ignores Kafka message headers

2018-07-26 Thread Alexey Trenikhin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559157#comment-16559157
 ] 

Alexey Trenikhin commented on FLINK-8354:
-

In our case we are using headers to propagate Avro schema fingerprints, so to 
deserialize we need access to the headers.

> Flink Kafka connector ignores Kafka message headers
> -
>
> Key: FLINK-8354
> URL: https://issues.apache.org/jira/browse/FLINK-8354
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
> Environment: Kafka 0.11.0.0
> Flink 1.4.0
> flink-connector-kafka-0.11_2.11 
>Reporter: Mohammad Abareghi
>Assignee: Aegeaner
>Priority: Major
>
> Kafka introduced the notion of message headers in version 0.11.0.0 
> (https://issues.apache.org/jira/browse/KAFKA-4208), but 
> flink-connector-kafka-0.11_2.11, which supports Kafka 0.11.0.0, ignores 
> headers when consuming Kafka messages. 
> It would be useful in some scenarios, such as distributed log tracing, to 
> expose message headers in FlinkKafkaConsumer011 and FlinkKafkaProducer011. 





[jira] [Assigned] (FLINK-9825) Upgrade checkstyle version to 8.6

2018-07-26 Thread zhangminglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangminglei reassigned FLINK-9825:
---

Assignee: (was: zhangminglei)

> Upgrade checkstyle version to 8.6
> -
>
> Key: FLINK-9825
> URL: https://issues.apache.org/jira/browse/FLINK-9825
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Priority: Minor
>
> We should upgrade checkstyle version to 8.6+ so that we can use the "match 
> violation message to this regex" feature for suppression. 





[jira] [Commented] (FLINK-6968) Store streaming, updating tables with unique key in queryable state

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559156#comment-16559156
 ] 

ASF GitHub Bot commented on FLINK-6968:
---

liurenjie1024 commented on issue #6434: [FLINK-6968][Table API & SQL] Add 
Queryable table sink.
URL: https://github.com/apache/flink/pull/6434#issuecomment-408291551
 
 
   I'm sorry for messing up https://github.com/apache/flink/pull/5688 with a 
bad rebase, so I opened this new PR.
   
   @twalthr could you please help review this? I've addressed your comments from 
https://github.com/apache/flink/pull/5688 and added an integration test.




> Store streaming, updating tables with unique key in queryable state
> ---
>
> Key: FLINK-6968
> URL: https://issues.apache.org/jira/browse/FLINK-6968
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
>
> Streaming tables with unique key are continuously updated. For example 
> queries with a non-windowed aggregation generate such tables. Commonly, such 
> updating tables are emitted via an upsert table sink to an external datastore 
> (k-v store, database) to make it accessible to applications.
> This issue is about adding a feature to store and maintain such a table as 
> queryable state in Flink. By storing the table in Flink's queryable state, we 
> do not need an external data store to access the results of the query but can 
> query the results directly from Flink.





[GitHub] liurenjie1024 commented on issue #6434: [FLINK-6968][Table API & SQL] Add Queryable table sink.

2018-07-26 Thread GitBox
liurenjie1024 commented on issue #6434: [FLINK-6968][Table API & SQL] Add 
Queryable table sink.
URL: https://github.com/apache/flink/pull/6434#issuecomment-408291551
 
 
   I'm sorry for messing up https://github.com/apache/flink/pull/5688 with a 
bad rebase, so I opened this new PR.
   
   @twalthr could you please help review this? I've addressed your comments from 
https://github.com/apache/flink/pull/5688 and added an integration test.




[jira] [Updated] (FLINK-6968) Store streaming, updating tables with unique key in queryable state

2018-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-6968:
--
Labels: pull-request-available  (was: )

> Store streaming, updating tables with unique key in queryable state
> ---
>
> Key: FLINK-6968
> URL: https://issues.apache.org/jira/browse/FLINK-6968
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
>
> Streaming tables with unique key are continuously updated. For example 
> queries with a non-windowed aggregation generate such tables. Commonly, such 
> updating tables are emitted via an upsert table sink to an external datastore 
> (k-v store, database) to make it accessible to applications.
> This issue is about adding a feature to store and maintain such a table as 
> queryable state in Flink. By storing the table in Flink's queryable state, we 
> do not need an external data store to access the results of the query but can 
> query the results directly from Flink.





[jira] [Commented] (FLINK-6968) Store streaming, updating tables with unique key in queryable state

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559155#comment-16559155
 ] 

ASF GitHub Bot commented on FLINK-6968:
---

liurenjie1024 opened a new pull request #6434: [FLINK-6968][Table API & SQL] 
Add Queryable table sink.
URL: https://github.com/apache/flink/pull/6434
 
 
   ## What is the purpose of the change
   
   Streaming tables with unique key are continuously updated. For example 
queries with a non-windowed aggregation generate such tables. Commonly, such 
updating tables are emitted via an upsert table sink to an external datastore 
(k-v store, database) to make it accessible to applications.
   
   ## Brief change log
   
 - *Add a QueryableStateTableSink.*
 - *States are queryable.*
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
 - *Added test that validates that states will be stored.*
   
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? docs
   




> Store streaming, updating tables with unique key in queryable state
> ---
>
> Key: FLINK-6968
> URL: https://issues.apache.org/jira/browse/FLINK-6968
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
>
> Streaming tables with unique key are continuously updated. For example 
> queries with a non-windowed aggregation generate such tables. Commonly, such 
> updating tables are emitted via an upsert table sink to an external datastore 
> (k-v store, database) to make it accessible to applications.
> This issue is about adding a feature to store and maintain such a table as 
> queryable state in Flink. By storing the table in Flink's queryable state, we 
> do not need an external data store to access the results of the query but can 
> query the results directly from Flink.





[GitHub] liurenjie1024 opened a new pull request #6434: [FLINK-6968][Table API & SQL] Add Queryable table sink.

2018-07-26 Thread GitBox
liurenjie1024 opened a new pull request #6434: [FLINK-6968][Table API & SQL] 
Add Queryable table sink.
URL: https://github.com/apache/flink/pull/6434
 
 
   ## What is the purpose of the change
   
   Streaming tables with unique key are continuously updated. For example 
queries with a non-windowed aggregation generate such tables. Commonly, such 
updating tables are emitted via an upsert table sink to an external datastore 
(k-v store, database) to make it accessible to applications.
   
   ## Brief change log
   
 - *Add a QueryableStateTableSink.*
 - *States are queryable.*
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
 - *Added test that validates that states will be stored.*
   
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? docs
   




[jira] [Commented] (FLINK-9630) Kafka09PartitionDiscoverer cause connection leak on TopicAuthorizationException

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559146#comment-16559146
 ] 

ASF GitHub Bot commented on FLINK-9630:
---

yanghua commented on issue #6336: [FLINK-9630] [connector] 
Kafka09PartitionDiscoverer cause connection …
URL: https://github.com/apache/flink/pull/6336#issuecomment-408288375
 
 
   Hi @ubyyj, Till and Chesnay are busy releasing Flink 1.6 and 1.5.2, just wait~




> Kafka09PartitionDiscoverer cause connection leak on 
> TopicAuthorizationException
> ---
>
> Key: FLINK-9630
> URL: https://issues.apache.org/jira/browse/FLINK-9630
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.5.0, 1.4.2
> Environment: Linux 2.6, java 8, Kafka broker 0.10.x
>Reporter: Youjun Yuan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.5.3
>
>
> When the Kafka topic gets deleted, Kafka09PartitionDiscoverer will get a 
> *TopicAuthorizationException* in getAllPartitionsForTopics() during task 
> startup and gets no chance to close the kafkaConsumer, resulting in a TCP 
> connection leak (to the Kafka broker).
>  
> *This issue can bring down the whole Flink cluster*, because in a default 
> setup (fixedDelay with INT.MAX restart attempts), the job manager will 
> randomly schedule the job to any TaskManager that has a free slot, and each 
> attempt causes that TaskManager to leak a TCP connection. Eventually almost 
> every TaskManager runs out of file handles, so no TaskManager can take a 
> snapshot or accept new jobs, effectively stopping the whole cluster.
>  
> The leak happens when StreamTask.invoke() calls openAllOperators(), and 
> FlinkKafkaConsumerBase.open() then calls 
> partitionDiscoverer.discoverPartitions(): when 
> kafkaConsumer.partitionsFor(topic) in 
> KafkaPartitionDiscoverer.getAllPartitionsForTopics() hits a 
> *TopicAuthorizationException*, no one catches it.
> Although StreamTask.open catches the exception and invokes the dispose() 
> method of each operator, which eventually invokes 
> FlinkKafkaConsumerBase.cancel(), cancel() does not close the kafkaConsumer in 
> the partitionDiscoverer; it does not even invoke 
> partitionDiscoverer.wakeup(), because the discoveryLoopThread is null.
>  
> Below is the code of FlinkKafkaConsumerBase.cancel() for your convenience:
> public void cancel() {
>      // set ourselves as not running;
>      // this would let the main discovery loop escape as soon as possible
>      running = false;
>     if (discoveryLoopThread != null) {
>         if (partitionDiscoverer != null)
> {             // we cannot close the discoverer here, as it is error-prone to 
> concurrent access;             // only wakeup the discoverer, the discovery 
> loop will clean itself up after it escapes             
> partitionDiscoverer.wakeup();         }
>     // the discovery loop may currently be sleeping in-between
>      // consecutive discoveries; interrupt to shutdown faster
>      discoveryLoopThread.interrupt();
>      }
>     // abort the fetcher, if there is one
>      if (kafkaFetcher != null)
> {          kafkaFetcher.cancel();     }
> }
>  
>  
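The fix the linked PR aims at boils down to one pattern: make sure the consumer is closed when partition discovery throws. A minimal, self-contained sketch of that pattern (`FakeConsumer` and `discoverOrClose` are illustrative stand-ins, not the Flink or Kafka API):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for the KafkaConsumer: partitionsFor() always throws,
// and close() records whether the connection was released.
class FakeConsumer implements AutoCloseable {
    final AtomicBoolean closed = new AtomicBoolean(false);

    List<String> partitionsFor(String topic) {
        throw new RuntimeException("TopicAuthorizationException (simulated)");
    }

    @Override
    public void close() {
        closed.set(true);
    }
}

public class DiscovererSketch {
    // Sketch of the fix: guarantee the consumer is closed even when partition
    // discovery throws, so the TCP connection cannot leak.
    static List<String> discoverOrClose(FakeConsumer consumer, String topic) {
        try {
            return consumer.partitionsFor(topic);
        } catch (RuntimeException e) {
            consumer.close(); // release the connection before propagating
            throw e;
        }
    }
}
```

With this shape, even INT.MAX restart attempts cannot accumulate leaked connections, because every failed discovery releases its consumer before the exception propagates.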



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on issue #6336: [FLINK-9630] [connector] Kafka09PartitionDiscoverer cause connection …

2018-07-26 Thread GitBox
yanghua commented on issue #6336: [FLINK-9630] [connector] 
Kafka09PartitionDiscoverer cause connection …
URL: https://github.com/apache/flink/pull/6336#issuecomment-408288375
 
 
   hi @ubyyj Till and Chesnay are busy releasing Flink 1.6 and 1.5.2, just wait~




[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559142#comment-16559142
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on issue #6390: [FLINK-9915] Add TO_BASE64 function for 
table/sql API
URL: https://github.com/apache/flink/pull/6390#issuecomment-408287972
 
 
   @twalthr, please take a look. Thanks.




> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64
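The MySQL-style semantics the ticket targets can be sketched in a few lines; this is illustrative only, not the implementation under review (null in, null out; the input's UTF-8 bytes are encoded):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ToBase64Sketch {
    // Minimal sketch of TO_BASE64 semantics: encode the UTF-8 bytes of the
    // input string; a null input yields null, mirroring SQL NULL handling.
    public static String toBase64(String s) {
        if (s == null) {
            return null;
        }
        return Base64.getEncoder().encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }
}
```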





[GitHub] xccui commented on issue #6390: [FLINK-9915] Add TO_BASE64 function for table/sql API

2018-07-26 Thread GitBox
xccui commented on issue #6390: [FLINK-9915] Add TO_BASE64 function for 
table/sql API
URL: https://github.com/apache/flink/pull/6390#issuecomment-408287972
 
 
   @twalthr, please take a look. Thanks.




[jira] [Closed] (FLINK-7470) Acquire RM leadership before registering with Mesos

2018-07-26 Thread Renjie Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renjie Liu closed FLINK-7470.
-
Resolution: Duplicate

> Acquire RM leadership before registering with Mesos
> ---
>
> Key: FLINK-7470
> URL: https://issues.apache.org/jira/browse/FLINK-7470
> Project: Flink
>  Issue Type: Bug
>  Components: Mesos
>Reporter: Eron Wright 
>Priority: Major
> Fix For: 1.7.0
>
>
> Mesos doesn't support fencing tokens in the scheduler protocol; it assumes 
> external leader election among scheduler instances.   The last connection 
> wins; prior connections for a given framework ID are closed.
> The Mesos RM should not register as a framework until it has acquired RM 
> leadership.   Evolve the ResourceManager as necessary.   One option is to 
> introduce an ResourceManagerRunner that acquires leadership before starting 
> the RM.





[jira] [Commented] (FLINK-9651) Add a Kafka table source factory with Protobuf format support

2018-07-26 Thread zhangminglei (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559128#comment-16559128
 ] 

zhangminglei commented on FLINK-9651:
-

Hi, [~twalthr] Thanks and I am on my way to do this.

> Add a Kafka table source factory with Protobuf format support
> -
>
> Key: FLINK-9651
> URL: https://issues.apache.org/jira/browse/FLINK-9651
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: zhangminglei
>Assignee: zhangminglei
>Priority: Major
>






[jira] [Commented] (FLINK-9970) Add ASCII function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559100#comment-16559100
 ] 

ASF GitHub Bot commented on FLINK-9970:
---

TisonKun commented on a change in pull request #6432: [FLINK-9970] Add ASCII 
function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r205643221
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -196,6 +196,19 @@ object ScalarFunctions {
 new String(data)
   }
 
+  /**
+* Returns the numeric value of the leftmost character of the string str.
+* Returns 0 if str is the empty string. Returns NULL if str is NULL.
+*/
+  def ascii(str: String): String = {
+if (str == null)
 
 Review comment:
   checkstyle would complain about no braces here, i.e. `if (...) { ... }`, 
also below
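For illustration, the documented semantics (numeric value of the leftmost character, 0 for the empty string, NULL for NULL) with the braces checkstyle asks for might look like this in Java; an illustrative sketch, not the Scala code under review:

```java
public class AsciiSketch {
    // Hypothetical helper mirroring MySQL's ASCII(): numeric value of the
    // leftmost character, 0 for the empty string, null for null input.
    public static Integer ascii(String str) {
        if (str == null) {
            return null;          // NULL in, NULL out
        }
        if (str.isEmpty()) {
            return 0;             // empty string maps to 0
        }
        return (int) str.charAt(0);
    }
}
```

Note the return type: since the function must distinguish 0 (empty string) from NULL, a boxed `Integer` (or SQL NULL handling in the runtime) is needed rather than a primitive.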




> Add ASCII function for table/sql API
> 
>
> Key: FLINK-9970
> URL: https://issues.apache.org/jira/browse/FLINK-9970
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : 
> https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_ascii





[GitHub] TisonKun commented on a change in pull request #6432: [FLINK-9970] Add ASCII function for table/sql API

2018-07-26 Thread GitBox
TisonKun commented on a change in pull request #6432: [FLINK-9970] Add ASCII 
function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r205643221
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -196,6 +196,19 @@ object ScalarFunctions {
 new String(data)
   }
 
+  /**
+* Returns the numeric value of the leftmost character of the string str.
+* Returns 0 if str is the empty string. Returns NULL if str is NULL.
+*/
+  def ascii(str: String): String = {
+if (str == null)
 
 Review comment:
   checkstyle would complain about no braces here, i.e. `if (...) { ... }`, 
also below




[jira] [Commented] (FLINK-9970) Add ASCII function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559098#comment-16559098
 ] 

ASF GitHub Bot commented on FLINK-9970:
---

TisonKun commented on a change in pull request #6432: [FLINK-9970] Add ASCII 
function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r205643221
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -196,6 +196,19 @@ object ScalarFunctions {
 new String(data)
   }
 
+  /**
+* Returns the numeric value of the leftmost character of the string str.
+* Returns 0 if str is the empty string. Returns NULL if str is NULL.
+*/
+  def ascii(str: String): String = {
+if (str == null)
 
 Review comment:
   checkstyle would complain about no braces here, i.e. `if (...) { ... }`




> Add ASCII function for table/sql API
> 
>
> Key: FLINK-9970
> URL: https://issues.apache.org/jira/browse/FLINK-9970
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : 
> https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_ascii





[GitHub] TisonKun commented on a change in pull request #6432: [FLINK-9970] Add ASCII function for table/sql API

2018-07-26 Thread GitBox
TisonKun commented on a change in pull request #6432: [FLINK-9970] Add ASCII 
function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r205643221
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -196,6 +196,19 @@ object ScalarFunctions {
 new String(data)
   }
 
+  /**
+* Returns the numeric value of the leftmost character of the string str.
+* Returns 0 if str is the empty string. Returns NULL if str is NULL.
+*/
+  def ascii(str: String): String = {
+if (str == null)
 
 Review comment:
   checkstyle would complain about no braces here, i.e. `if (...) { ... }`




[jira] [Comment Edited] (FLINK-9693) Possible memory leak in jobmanager retaining archived checkpoints

2018-07-26 Thread Steven Zhen Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16556325#comment-16556325
 ] 

Steven Zhen Wu edited comment on FLINK-9693 at 7/27/18 12:57 AM:
-

We can actually reproduce the issue by killing the jobmanager node for very large 
jobs, e.g. with parallelism over 1,000. The issue starts to appear when the 
replacement jobmanager node comes up.

Another observation is that the ~10 GB memory leak seems to happen very quickly 
(in less than a few minutes).

A few words about our setup
 * standalone cluster
 * HA mode using zookeeper
 * single jobmanager. we also tried two jobmanagers setup, same issue

 


was (Author: stevenz3wu):
We can actually reproduce the issue by killing jobmanager node for very large 
jobs, like parallelism over 1,000. This issue starts to appear when replacement 
jobmanager node came up. 

Another observation is that ~10 GB memory leak seems to happen very quickly 
(like < a few mins).

> Possible memory leak in jobmanager retaining archived checkpoints
> -
>
> Key: FLINK-9693
> URL: https://issues.apache.org/jira/browse/FLINK-9693
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager, State Backends, Checkpointing
>Affects Versions: 1.5.0, 1.6.0
> Environment: !image.png!!image (1).png!
>Reporter: Steven Zhen Wu
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.4.3, 1.5.1, 1.6.0
>
> Attachments: 20180725_jm_mem_leak.png, 
> 41K_ExecutionVertex_objs_retained_9GB.png, ExecutionVertexZoomIn.png
>
>
> First, some context about the job
>  * Flink 1.4.1
>  * stand-alone deployment mode
>  * embarrassingly parallel: all operators are chained together
>  * parallelism is over 1,000
>  * stateless except for Kafka source operators. checkpoint size is 8.4 MB.
>  * set "state.backend.fs.memory-threshold" so that only jobmanager writes to 
> S3 to checkpoint
>  * internal checkpoint with 10 checkpoints retained in history
>  
> Summary of the observations
>  * 41,567 ExecutionVertex objects retained 9+ GB of memory
>  * Expanded in one ExecutionVertex. it seems to storing the kafka offsets for 
> source operator





[jira] [Commented] (FLINK-9630) Kafka09PartitionDiscoverer cause connection leak on TopicAuthorizationException

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559076#comment-16559076
 ] 

ASF GitHub Bot commented on FLINK-9630:
---

ubyyj commented on issue #6336: [FLINK-9630] [connector] 
Kafka09PartitionDiscoverer cause connection …
URL: https://github.com/apache/flink/pull/6336#issuecomment-408274936
 
 
   @tillrohrmann @zentol would you please take a look, and if everything looks 
good to you, can you please get this merged?  Thanks




> Kafka09PartitionDiscoverer cause connection leak on 
> TopicAuthorizationException
> ---
>
> Key: FLINK-9630
> URL: https://issues.apache.org/jira/browse/FLINK-9630
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.5.0, 1.4.2
> Environment: Linux 2.6, java 8, Kafka broker 0.10.x
>Reporter: Youjun Yuan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.5.3
>
>
> When the Kafka topic gets deleted during the task start-up process, 
> Kafka09PartitionDiscoverer will get a *TopicAuthorizationException* in 
> getAllPartitionsForTopics(), and it gets no chance to close the 
> kafkaConsumer, resulting in a TCP connection leak (to the Kafka broker).
>  
> *This issue can bring down the whole Flink cluster*, because, in a default 
> setup (fixedDelay with INT.MAX restart attempts), the job manager will randomly 
> schedule the job to any TaskManager that has a free slot, and each attempt will 
> cause the TaskManager to leak a TCP connection. Eventually almost every 
> TaskManager will run out of file handles, so no TaskManager can take a 
> snapshot or accept new jobs. This effectively stops the whole cluster.
>  
> The leak happens when StreamTask.invoke() calls openAllOperators(), then 
> FlinkKafkaConsumerBase.open() calls partitionDiscoverer.discoverPartitions(), 
> and kafkaConsumer.partitionsFor(topic) in 
> KafkaPartitionDiscoverer.getAllPartitionsForTopics() hits a 
> *TopicAuthorizationException* that no one catches.
> Although StreamTask.open catches the exception and invokes the dispose() method 
> of each operator, which eventually invokes FlinkKafkaConsumerBase.cancel(), 
> that method does not close the kafkaConsumer in partitionDiscoverer, nor even 
> invoke partitionDiscoverer.wakeup(), because the discoveryLoopThread was 
> null.
>  
> Below is the code of FlinkKafkaConsumerBase.cancel() for your convenience:
> {code:java}
> public void cancel() {
>     // set ourselves as not running;
>     // this would let the main discovery loop escape as soon as possible
>     running = false;
>
>     if (discoveryLoopThread != null) {
>         if (partitionDiscoverer != null) {
>             // we cannot close the discoverer here, as it is error-prone to
>             // concurrent access; only wake up the discoverer, the discovery
>             // loop will clean itself up after it escapes
>             partitionDiscoverer.wakeup();
>         }
>
>         // the discovery loop may currently be sleeping in-between
>         // consecutive discoveries; interrupt to shutdown faster
>         discoveryLoopThread.interrupt();
>     }
>
>     // abort the fetcher, if there is one
>     if (kafkaFetcher != null) {
>         kafkaFetcher.cancel();
>     }
> }
> {code}





[GitHub] ubyyj commented on issue #6336: [FLINK-9630] [connector] Kafka09PartitionDiscoverer cause connection …

2018-07-26 Thread GitBox
ubyyj commented on issue #6336: [FLINK-9630] [connector] 
Kafka09PartitionDiscoverer cause connection …
URL: https://github.com/apache/flink/pull/6336#issuecomment-408274936
 
 
   @tillrohrmann @zentol would you please take a look, and if everything looks 
good to you, can you please get this merged?  Thanks




[jira] [Commented] (FLINK-9967) Snapshot release branches aren't deployed

2018-07-26 Thread Thomas Weise (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559033#comment-16559033
 ] 

Thomas Weise commented on FLINK-9967:
-

Good that this is fixed. We saw strange build errors in individual modules when we 
assumed that up-to-date snapshots were available remotely.

> Snapshot release branches aren't deployed
> -
>
> Key: FLINK-9967
> URL: https://issues.apache.org/jira/browse/FLINK-9967
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.2, 1.5.1, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>
> We no longer publish snapshot artifacts for {{release-1.4, release-1.5, 
> release-1.6}} branches. The [existing 
> artifacts|https://repository.apache.org/content/groups/snapshots/org/apache/flink/]
>  are all from the master branch when said version was the current one.





[jira] [Updated] (FLINK-9825) Upgrade checkstyle version to 8.6

2018-07-26 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9825:
--
Component/s: Build System

> Upgrade checkstyle version to 8.6
> -
>
> Key: FLINK-9825
> URL: https://issues.apache.org/jira/browse/FLINK-9825
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Assignee: zhangminglei
>Priority: Minor
>
> We should upgrade checkstyle version to 8.6+ so that we can use the "match 
> violation message to this regex" feature for suppression. 





[jira] [Commented] (FLINK-9962) allow users to specify TimeZone in DateTimeBucketer

2018-07-26 Thread Bowen Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558900#comment-16558900
 ] 

Bowen Li commented on FLINK-9962:
-

[~fhue...@gmail.com] what do you think of this change?

> allow users to specify TimeZone in DateTimeBucketer
> ---
>
> Key: FLINK-9962
> URL: https://issues.apache.org/jira/browse/FLINK-9962
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.7.0
>
>
> Currently {{DateTimeBucketer}} will return a bucket path by using local 
> timezone. We should add a {{timezone}} constructor param to allow users to 
> specify a timezone.
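A minimal sketch of the proposed constructor parameter, assuming a formatter-based bucketer (class and method names here are illustrative, not the actual DateTimeBucketer API):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class TimeZoneBucketer {
    // Derive the bucket path from an explicit time zone instead of the JVM
    // default, so the same record maps to the same bucket on every machine.
    private final DateTimeFormatter formatter;

    public TimeZoneBucketer(String pattern, ZoneId zone) {
        this.formatter = DateTimeFormatter.ofPattern(pattern).withZone(zone);
    }

    public String bucketFor(long epochMillis) {
        return formatter.format(Instant.ofEpochMilli(epochMillis));
    }
}
```

The key design point is that the zone is fixed at construction time; relying on `ZoneId.systemDefault()` would silently produce different bucket paths on TaskManagers configured with different local time zones.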





[jira] [Closed] (FLINK-8873) move unit tests of KeyedStream from DataStreamTest to KeyedStreamTest

2018-07-26 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-8873.
---
Resolution: Won't Fix

> move unit tests of KeyedStream from DataStreamTest to KeyedStreamTest
> -
>
> Key: FLINK-8873
> URL: https://issues.apache.org/jira/browse/FLINK-8873
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API, Tests
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.6.0
>
>
> move unit tests of KeyedStream.scala from DataStreamTest.scala to 
> KeyedStreamTest.scala, in order to have clearer separation





[jira] [Commented] (FLINK-9959) JoinFunction should be able to access its Window

2018-07-26 Thread Bowen Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558899#comment-16558899
 ] 

Bowen Li commented on FLINK-9959:
-

[~aljoscha] what do you think of this change?

> JoinFunction should be able to access its Window
> 
>
> Key: FLINK-9959
> URL: https://issues.apache.org/jira/browse/FLINK-9959
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.7.0
>
>
> e.g. currently, a windowed join looks like this, and the JoinFunction doesn't 
> have access to the Window it runs against.
> {code:java}
> A.join(B)
>   .where(...)
>   .equalTo(...)
>   .window(...)
>   .apply(new JoinFunction() {});
> {code}
> We can give JoinFunction access to its Window as {{JoinFunction<IN1, IN2, OUT, Window>}}
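A hedged sketch of what such an interface could look like; this mirrors the ticket's proposal and is not an existing Flink interface:

```java
// Proposal sketch: a join function that also receives the window it is
// evaluated in. Type parameter names are illustrative.
@FunctionalInterface
public interface WindowedJoinFunction<IN1, IN2, OUT, W> {
    OUT join(IN1 first, IN2 second, W window);
}
```

Being a single-method interface, it would remain lambda-friendly in user code, just like the existing JoinFunction.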





[jira] [Updated] (FLINK-9946) Quickstart E2E test archetype version is hard-coded

2018-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9946:
--
Labels: pull-request-available  (was: )

> Quickstart E2E test archetype version is hard-coded
> ---
>
> Key: FLINK-9946
> URL: https://issues.apache.org/jira/browse/FLINK-9946
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.6.0, 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> {code}
> mvn archetype:generate   \
> -DarchetypeGroupId=org.apache.flink  \
> -DarchetypeArtifactId=flink-quickstart-${TEST_TYPE}  \
> -DarchetypeVersion=1.6-SNAPSHOT  \
> -DgroupId=org.apache.flink.quickstart\
> -DartifactId=${ARTIFACT_ID}  \
> -Dversion=${ARTIFACT_VERSION}\
> -Dpackage=org.apache.flink.quickstart\
> -DinteractiveMode=false
> {code}





[jira] [Commented] (FLINK-9946) Quickstart E2E test archetype version is hard-coded

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558875#comment-16558875
 ] 

ASF GitHub Bot commented on FLINK-9946:
---

zentol opened a new pull request #6433: [FLINK-9946][tests] Expose Flink 
version to E2E tests
URL: https://github.com/apache/flink/pull/6433
 
 
   ## What is the purpose of the change
   
   With this PR the flink version is determined from the 
`flink-end-to-end-tests` pom and exposed to test scripts. The primary purpose 
is to use this version in the quickstart end-to-end test to select the correct 
archetype version.
   




> Quickstart E2E test archetype version is hard-coded
> ---
>
> Key: FLINK-9946
> URL: https://issues.apache.org/jira/browse/FLINK-9946
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.6.0, 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> {code}
> mvn archetype:generate   \
> -DarchetypeGroupId=org.apache.flink  \
> -DarchetypeArtifactId=flink-quickstart-${TEST_TYPE}  \
> -DarchetypeVersion=1.6-SNAPSHOT  \
> -DgroupId=org.apache.flink.quickstart\
> -DartifactId=${ARTIFACT_ID}  \
> -Dversion=${ARTIFACT_VERSION}\
> -Dpackage=org.apache.flink.quickstart\
> -DinteractiveMode=false
> {code}





[GitHub] zentol opened a new pull request #6433: [FLINK-9946][tests] Expose Flink version to E2E tests

2018-07-26 Thread GitBox
zentol opened a new pull request #6433: [FLINK-9946][tests] Expose Flink 
version to E2E tests
URL: https://github.com/apache/flink/pull/6433
 
 
   ## What is the purpose of the change
   
   With this PR the flink version is determined from the 
`flink-end-to-end-tests` pom and exposed to test scripts. The primary purpose 
is to use this version in the quickstart end-to-end test to select the correct 
archetype version.
   




[jira] [Commented] (FLINK-8523) Stop assigning floating buffers for blocked input channels in exactly-once mode

2018-07-26 Thread Nico Kruber (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558825#comment-16558825
 ] 

Nico Kruber commented on FLINK-8523:


Sorry for coming back so late, but we recently discovered that it is actually 
very important not to hold on to the received (floating and exclusive) buffers 
during alignment either, because then you would not use all available resources 
to do the alignment for the remaining channels. This may make alignments 
worse than without flow control.

This ticket only prevents assigning new floating buffers to blocked channels, 
but the effect is somewhat limited. What we would really need is to also free 
any floating buffers that we already have. Since we also have exclusive buffers 
in the {{BarrierBuffer}} and need to keep the order, [~StephanEwen] and I were 
thinking about 1) blocking the channel so it does not advertise more credit, and 
2) spilling all of the already-received buffers to disk (as before), thereby 
freeing all used floating buffers.

This ticket can be seen as a step towards that goal, but I'd really like to see 
the whole picture altogether (possible follow-up tasks, or widening this ticket).

> Stop assigning floating buffers for blocked input channels in exactly-once 
> mode
> ---
>
> Key: FLINK-8523
> URL: https://issues.apache.org/jira/browse/FLINK-8523
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Major
>  Labels: pull-request-available
>
> In exactly-once mode, an input channel is set to the blocked state when a 
> barrier is read from it. The blocked state is released after barrier 
> alignment or cancellation.
>  
> In credit-based network flow control, we should avoid assigning floating 
> buffers to blocked input channels, because the buffers after the barrier will 
> not be processed by the operator until alignment.
> This way, we can make full use of the floating buffers and speed up barrier 
> alignment to some extent.
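The idea can be modeled in a toy form: floating buffers are only handed out to channels that are not blocked by a barrier. This is a simplified illustration under assumed names; Flink's actual SingleInputGate and buffer pool are far more involved:

```java
import java.util.ArrayList;
import java.util.List;

public class FloatingBufferGate {
    public static final class Channel {
        public boolean blocked;        // set while waiting for barrier alignment
        public int floatingBuffers;    // floating buffers assigned so far
    }

    public final List<Channel> channels = new ArrayList<>();
    public int floatingPool;           // buffers still available in the pool

    public FloatingBufferGate(int numChannels, int poolSize) {
        for (int i = 0; i < numChannels; i++) {
            channels.add(new Channel());
        }
        floatingPool = poolSize;
    }

    // One round of buffer distribution: skip blocked channels, since data
    // arriving after their barrier won't be consumed until alignment anyway.
    public void assignFloatingBuffers() {
        for (Channel ch : channels) {
            if (ch.blocked) {
                continue;
            }
            if (floatingPool > 0) {
                floatingPool--;
                ch.floatingBuffers++;
            }
        }
    }
}
```

Skipping blocked channels keeps the scarce floating buffers working for the channels whose barriers are still outstanding, which is exactly what speeds up the alignment.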





[jira] [Commented] (FLINK-8523) Stop assigning floating buffers for blocked input channels in exactly-once mode

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558810#comment-16558810
 ] 

ASF GitHub Bot commented on FLINK-8523:
---

NicoK commented on a change in pull request #5381: [FLINK-8523][network] Stop 
assigning floating buffers for blocked input channels in exactly-once mode
URL: https://github.com/apache/flink/pull/5381#discussion_r205575969
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGate.java
 ##
 @@ -133,8 +134,13 @@
 * Input channels. There is a one input channel for each consumed 
intermediate result partition.
 * We store this in a map for runtime updates of single channels.
 */
+   @GuardedBy("requestLock")
private final Map 
inputChannels;
 
+   /** A mapping from internal channel index in this gate to input 
channel. */
+   @GuardedBy("requestLock")
+   private final Map indexToInputChannelMap;
 
 Review comment:
   and maybe call it `inputChannelsByChannelIndex` instead?




> Stop assigning floating buffers for blocked input channels in exactly-once 
> mode
> ---
>
> Key: FLINK-8523
> URL: https://issues.apache.org/jira/browse/FLINK-8523
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Major
>  Labels: pull-request-available
>
> In exactly-once mode, an input channel is set to a blocked state when a
> barrier is read from it, and the blocked state is released once barrier
> alignment completes or is cancelled.
>  
> In credit-based network flow control, we should avoid assigning floating
> buffers to blocked input channels, because buffers received after the
> barrier will not be processed by the operator until alignment finishes.
> This way we can make full use of the floating buffers and speed up barrier
> alignment to some extent.
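The mechanism described above can be illustrated with a minimal, self-contained sketch. All names here are hypothetical stand-ins, not Flink's actual classes: a floating buffer is handed only to unblocked channels, while round-robin fairness among waiting listeners is preserved.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch (hypothetical types): skip blocked channels when handing out
// floating buffers during barrier alignment.
public class FloatingBufferAssignment {
    static class Channel {
        final String name;
        volatile boolean blocked;   // set when a barrier was read from this channel
        int floatingBuffers;        // floating buffers currently assigned
        Channel(String name) { this.name = name; }
    }

    /** Assign one floating buffer to the first unblocked listener, if any. */
    static Channel assignFloatingBuffer(Queue<Channel> listeners) {
        for (int i = 0; i < listeners.size(); i++) {
            Channel c = listeners.poll();
            if (!c.blocked) {
                c.floatingBuffers++;
                listeners.add(c);   // re-queue at the tail: round-robin fairness
                return c;
            }
            listeners.add(c);       // blocked: re-queue without assigning a buffer
        }
        return null;                // every listener is blocked
    }

    public static void main(String[] args) {
        Queue<Channel> q = new ArrayDeque<>();
        Channel a = new Channel("a");
        a.blocked = true;           // already read a barrier from "a"
        Channel b = new Channel("b");
        q.add(a);
        q.add(b);
        Channel got = assignFloatingBuffer(q);
        System.out.println(got.name + " " + got.floatingBuffers); // b 1
    }
}
```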



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8523) Stop assigning floating buffers for blocked input channels in exactly-once mode

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558811#comment-16558811
 ] 

ASF GitHub Bot commented on FLINK-8523:
---

NicoK commented on a change in pull request #5381: [FLINK-8523][network] Stop 
assigning floating buffers for blocked input channels in exactly-once mode
URL: https://github.com/apache/flink/pull/5381#discussion_r205577375
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGate.java
 ##
 @@ -485,6 +494,23 @@ public void requestPartitions() throws IOException, 
InterruptedException {
}
}
 
+   @Override
+   public void blockInputChannel(int channelIndex) {
+   InputChannel inputChannel = 
indexToInputChannelMap.get(channelIndex);
+   if (inputChannel == null) {
+   throw new IllegalStateException("Could not find input 
channel from the channel index " + channelIndex);
+   }
+
+   inputChannel.setBlocked(true);
+   }
+
+   @Override
+   public void releaseBlockedInputChannels() {
+   for (InputChannel inputChannel : inputChannels.values()) {
+   inputChannel.setBlocked(false);
+   }
 
 Review comment:
   ~Do we need to make sure that there's no concurrent `blockInputChannel` call 
trying to block a channel for an alignment for a later checkpoint or can we 
assume here that we are still processing one alignment (doing the release) and 
therefore cannot concurrently block?~
   I guess we are safe here - `BarrierBuffer` actually does it the same way.




[jira] [Commented] (FLINK-8523) Stop assigning floating buffers for blocked input channels in exactly-once mode

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558813#comment-16558813
 ] 

ASF GitHub Bot commented on FLINK-8523:
---

NicoK commented on a change in pull request #5381: [FLINK-8523][network] Stop 
assigning floating buffers for blocked input channels in exactly-once mode
URL: https://github.com/apache/flink/pull/5381#discussion_r205576747
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGate.java
 ##
 @@ -485,6 +494,23 @@ public void requestPartitions() throws IOException, 
InterruptedException {
}
}
 
+   @Override
+   public void blockInputChannel(int channelIndex) {
+   InputChannel inputChannel = 
indexToInputChannelMap.get(channelIndex);
+   if (inputChannel == null) {
+   throw new IllegalStateException("Could not find input 
channel from the channel index " + channelIndex);
 
 Review comment:
   both argumentations sound right:
   how about `checkArgument(0 <= channelIndex && channelIndex < 
numberOfInputChannels)` and `checkState(inputChannel != null)`
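The precondition style suggested above can be sketched as follows. To stay self-contained, `checkArgument`/`checkState` are written inline (mirroring Flink's `Preconditions` helpers), and a `String[]` stands in for the real input-channel lookup structure:

```java
// Sketch of the suggested checks: an IllegalArgumentException for a
// caller-supplied index that is out of range, an IllegalStateException
// for an internal inconsistency (no channel registered at a valid index).
public class BlockChannelChecks {
    static void checkArgument(boolean cond, String msg) {
        if (!cond) throw new IllegalArgumentException(msg);
    }

    static void checkState(boolean cond, String msg) {
        if (!cond) throw new IllegalStateException(msg);
    }

    static String lookupChannel(String[] channelsByIndex, int channelIndex) {
        checkArgument(0 <= channelIndex && channelIndex < channelsByIndex.length,
            "channel index out of range: " + channelIndex);
        String channel = channelsByIndex[channelIndex];
        checkState(channel != null, "no channel registered at index " + channelIndex);
        return channel;
    }

    public static void main(String[] args) {
        String[] channels = {"ch-0", null, "ch-2"};
        System.out.println(lookupChannel(channels, 0)); // ch-0
        try {
            lookupChannel(channels, 5);                 // out of range
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```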




[jira] [Commented] (FLINK-8523) Stop assigning floating buffers for blocked input channels in exactly-once mode

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558814#comment-16558814
 ] 

ASF GitHub Bot commented on FLINK-8523:
---

NicoK commented on a change in pull request #5381: [FLINK-8523][network] Stop 
assigning floating buffers for blocked input channels in exactly-once mode
URL: https://github.com/apache/flink/pull/5381#discussion_r205575797
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGate.java
 ##
 @@ -133,8 +134,13 @@
 * Input channels. There is a one input channel for each consumed 
intermediate result partition.
 * We store this in a map for runtime updates of single channels.
 */
+   @GuardedBy("requestLock")
	private final Map<IntermediateResultPartitionID, InputChannel> inputChannels;
 
+   /** A mapping from internal channel index in this gate to input 
channel. */
+   @GuardedBy("requestLock")
+   private final Map<Integer, InputChannel> indexToInputChannelMap;
 
 Review comment:
   Actually, this could be a simple array, couldn't it? If you look at 
`org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate#create` 
you'll see that `0 <= channelIndex < inputChannels.length`
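A small sketch of the map-versus-array trade-off the comment points at, with `String` standing in for `InputChannel`: because the index space is dense (`0 <= channelIndex < numberOfInputChannels`), a plain array gives the same lookups without boxing the integer key.

```java
import java.util.HashMap;
import java.util.Map;

// Demonstrates that a dense integer-keyed Map can be replaced by an array.
public class IndexLookup {
    public static void main(String[] args) {
        int numberOfInputChannels = 3;

        // Map-based variant (what the PR originally used):
        Map<Integer, String> indexToChannel = new HashMap<>();
        for (int i = 0; i < numberOfInputChannels; i++) {
            indexToChannel.put(i, "channel-" + i);
        }

        // Array-based variant (the suggested replacement):
        String[] channelsByIndex = new String[numberOfInputChannels];
        for (int i = 0; i < numberOfInputChannels; i++) {
            channelsByIndex[i] = "channel-" + i;
        }

        // Both lookups agree on every dense index.
        for (int i = 0; i < numberOfInputChannels; i++) {
            if (!indexToChannel.get(i).equals(channelsByIndex[i])) {
                throw new AssertionError("mismatch at index " + i);
            }
        }
        System.out.println("lookups agree");
    }
}
```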




[jira] [Commented] (FLINK-8523) Stop assigning floating buffers for blocked input channels in exactly-once mode

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558812#comment-16558812
 ] 

ASF GitHub Bot commented on FLINK-8523:
---

NicoK commented on a change in pull request #5381: [FLINK-8523][network] Stop 
assigning floating buffers for blocked input channels in exactly-once mode
URL: https://github.com/apache/flink/pull/5381#discussion_r205579424
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/UnionInputGate.java
 ##
 @@ -82,30 +82,40 @@
 */
	private final Map<InputGate, Integer> inputGateToIndexOffsetMap;
 
+   /** A mapping from logical channel index (internal channel index in 
input gate plus gate's offset) to input gate. */
+   private final Map<Integer, InputGate> indexToInputGateMap;
 
 Review comment:
   Similar to the `SingleInputGate`: use an array, rename member






[jira] [Commented] (FLINK-8523) Stop assigning floating buffers for blocked input channels in exactly-once mode

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558793#comment-16558793
 ] 

ASF GitHub Bot commented on FLINK-8523:
---

NicoK commented on a change in pull request #5381: [FLINK-8523][network] Stop 
assigning floating buffers for blocked input channels in exactly-once mode
URL: https://github.com/apache/flink/pull/5381#discussion_r205572919
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/InputChannel.java
 ##
 @@ -66,6 +66,9 @@
/** The current backoff (in ms) */
private int currentBackoff;
 
+   /** Flag indicating whether this channel is currently blocked or not. */
+   private volatile boolean isBlocked = false;
 
 Review comment:
   Does it make sense to have it in the `LocalInputChannel` then? I guess, we 
should not add dead code.






[jira] [Commented] (FLINK-8523) Stop assigning floating buffers for blocked input channels in exactly-once mode

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558791#comment-16558791
 ] 

ASF GitHub Bot commented on FLINK-8523:
---

NicoK commented on a change in pull request #5381: [FLINK-8523][network] Stop 
assigning floating buffers for blocked input channels in exactly-once mode
URL: https://github.com/apache/flink/pull/5381#discussion_r205572300
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
 ##
 @@ -360,8 +360,9 @@ public boolean notifyBufferAvailable(Buffer buffer) {
 
// Important: double check the isReleased state inside 
synchronized block, so there is no
// race condition when notifyBufferAvailable and 
releaseAllResources running in parallel.
-   if (isReleased.get() || 
bufferQueue.getAvailableBufferSize() >= numRequiredBuffers) {
+   if (isReleased.get() || isBlocked() || 
bufferQueue.getAvailableBufferSize() >= numRequiredBuffers) {
 
 Review comment:
   Indeed, the "blocked" credits should be in `numRequiredBuffers` which was 
updated with the last buffer that we received. Any update on that will come 
with the next buffer. We just need to make sure that we acknowledge the newly 
freed credit to the sender (the backlog may be 0).
   
   The request for floating buffers should remain fair among all channels 
though and therefore we cannot request all floating buffers to satisfy that 
need at once!






[jira] [Assigned] (FLINK-9976) Odd signatures for streaming file sink format builders

2018-07-26 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-9976:
---

Assignee: Chesnay Schepler



[jira] [Created] (FLINK-9976) Odd signatures for streaming file sink format builders

2018-07-26 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9976:
---

 Summary: Odd signatures for streaming file sink format builders
 Key: FLINK-9976
 URL: https://issues.apache.org/jira/browse/FLINK-9976
 Project: Flink
  Issue Type: Bug
  Components: Streaming Connectors
Affects Versions: 1.6.0
Reporter: Chesnay Schepler


There are two instances of apparently unnecessary generic parameters in the 
format builders for the {{StreamingFileSink}}.

Both methods declare a generic parameter for the BucketID type, although the 
builder itself already has such a parameter. The methods use unchecked casts to 
make the types fit, so we should be able to change the signatures to use the 
builder's parameter instead.

{code}
public static class RowFormatBuilder<IN, BucketID> extends 
StreamingFileSink.BucketsBuilder<IN, BucketID> {
...
	public <ID> StreamingFileSink.RowFormatBuilder<IN, ID> 
withBucketerAndPolicy(final Bucketer<IN, ID> bucketer, final RollingPolicy<IN, ID> policy) {
		@SuppressWarnings("unchecked")
		StreamingFileSink.RowFormatBuilder<IN, ID> reInterpreted = 
(StreamingFileSink.RowFormatBuilder<IN, ID>) this;
		reInterpreted.bucketer = Preconditions.checkNotNull(bucketer);
		reInterpreted.rollingPolicy = Preconditions.checkNotNull(policy);
		return reInterpreted;
	}
...
{code}

{code}
public static class BulkFormatBuilder<IN, BucketID> extends 
StreamingFileSink.BucketsBuilder<IN, BucketID> {
...
	public <ID> StreamingFileSink.BulkFormatBuilder<IN, ID> 
withBucketer(Bucketer<IN, ID> bucketer) {
		@SuppressWarnings("unchecked")
		StreamingFileSink.BulkFormatBuilder<IN, ID> reInterpreted = 
(StreamingFileSink.BulkFormatBuilder<IN, ID>) this;
		reInterpreted.bucketer = Preconditions.checkNotNull(bucketer);
		return reInterpreted;
	}
...
{code}
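A minimal before/after sketch of the proposed signature change, with heavily simplified stand-in types (an empty `Bucketer` interface and bare builder classes, not the real `StreamingFileSink` API): once the method reuses the class's own `BucketID` parameter, the unchecked cast of `this` disappears.

```java
// Illustrates why the per-method type parameter forces an unchecked cast,
// and how reusing the class's type parameter removes it.
public class Builders {
    interface Bucketer<IN, ID> {}

    // Before: a fresh <ID> on the method means `this` must be re-interpreted.
    static class RowFormatBuilderBefore<IN, BucketID> {
        Bucketer<IN, ?> bucketer;

        <ID> RowFormatBuilderBefore<IN, ID> withBucketer(Bucketer<IN, ID> b) {
            @SuppressWarnings("unchecked")
            RowFormatBuilderBefore<IN, ID> reInterpreted = (RowFormatBuilderBefore<IN, ID>) this;
            reInterpreted.bucketer = b;
            return reInterpreted;
        }
    }

    // After: the class's BucketID parameter is reused; no cast needed.
    static class RowFormatBuilderAfter<IN, BucketID> {
        Bucketer<IN, BucketID> bucketer;

        RowFormatBuilderAfter<IN, BucketID> withBucketer(Bucketer<IN, BucketID> b) {
            this.bucketer = b;
            return this;
        }
    }

    public static void main(String[] args) {
        RowFormatBuilderAfter<String, String> builder = new RowFormatBuilderAfter<>();
        // The fluent call returns the same instance, no re-interpretation involved.
        System.out.println(builder.withBucketer(new Bucketer<String, String>() {}) == builder);
    }
}
```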





[jira] [Commented] (FLINK-9897) Further enhance adaptive reads in Kinesis Connector to depend on run loop time

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558698#comment-16558698
 ] 

ASF GitHub Bot commented on FLINK-9897:
---

tweise commented on a change in pull request #6408: [FLINK-9897][Kinesis 
Connector] Make adaptive reads depend on run loop time instead of 
fetchintervalmillis
URL: https://github.com/apache/flink/pull/6408#discussion_r205551265
 
 

 ##
 File path: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/ShardConsumer.java
 ##
 @@ -233,26 +225,69 @@ public void run() {

subscribedShard.getShard().getHashKeyRange().getEndingHashKey());
 
long recordBatchSizeBytes = 0L;
-   long averageRecordSizeBytes = 0L;
-
for (UserRecord record : 
fetchedRecords) {
recordBatchSizeBytes += 
record.getData().remaining();

deserializeRecordForCollectionAndUpdateState(record);
}
 
-   if (useAdaptiveReads && 
!fetchedRecords.isEmpty()) {
-   averageRecordSizeBytes = 
recordBatchSizeBytes / fetchedRecords.size();
-   maxNumberOfRecordsPerFetch = 
getAdaptiveMaxRecordsPerFetch(averageRecordSizeBytes);
-   }
-
nextShardItr = 
getRecordsResult.getNextShardIterator();
+
+   long processingEndTimeNanos = 
System.nanoTime();
+
+   long adjustmentEndTimeNanos = 
adjustRunLoopFrequency(processingStartTimeNanos, processingEndTimeNanos);
+   long runLoopTimeNanos = 
adjustmentEndTimeNanos - processingStartTimeNanos;
+   maxNumberOfRecordsPerFetch = 
adaptRecordsToRead(runLoopTimeNanos, fetchedRecords.size(), 
recordBatchSizeBytes);
+   processingStartTimeNanos = 
adjustmentEndTimeNanos; // for next time through the loop
}
}
} catch (Throwable t) {
fetcherRef.stopWithError(t);
}
}
 
+   /**
+* Adjusts loop timing to match target frequency if specified.
+* @param processingStartTimeNanos The start time of the run loop "work"
+* @param processingEndTimeNanos The end time of the run loop "work"
+* @return The System.nanoTime() after the sleep (if any)
+* @throws InterruptedException
+*/
+   protected long adjustRunLoopFrequency(long processingStartTimeNanos, 
long processingEndTimeNanos)
+   throws InterruptedException {
+   long endTimeNanos = processingEndTimeNanos;
+   if (fetchIntervalMillis != 0) {
+   long processingTimeNanos = processingEndTimeNanos - 
processingStartTimeNanos;
+   long sleepTimeMillis = fetchIntervalMillis - 
(processingTimeNanos / 1_000_000);
+   if (sleepTimeMillis > 0) {
+   Thread.sleep(sleepTimeMillis);
+   endTimeNanos = System.nanoTime();
+   }
+   }
+   return endTimeNanos;
+   }
+
+   /**
+* Calculates how many records to read each time through the loop based 
on a target throughput
+* and the measured frequency of the loop.
+* @param runLoopTimeNanos The total time of one pass through the loop
+* @param numRecords The number of records of the last read operation
+* @param recordBatchSizeBytes The total batch size of the last read 
operation
+*/
+   protected int adaptRecordsToRead(long runLoopTimeNanos, int numRecords, 
long recordBatchSizeBytes) {
 
 Review comment:
   +1 except that it shouldn't be static so that a subclass can override it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Further enhance adaptive reads in Kinesis Connector to depend on run loop time
> --
>
> Key: FLINK-9897
> URL: https://issues.apache.org/jira/browse/FLINK-9897
> Project: Flink
>  Issue Type: Improvement
>   
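The adaptive sizing discussed in this thread derives the next `maxNumberOfRecordsPerFetch` from the measured run-loop time and the last batch's size. A rough, self-contained sketch of that idea follows; the constant names, the 2 MB/s target (the per-shard Kinesis read limit), and the clamping bounds are illustrative assumptions, not Flink's actual implementation:

```java
// Hedged sketch of an adaptive records-per-fetch calculation.
// READ_LIMIT_BYTES_PER_SEC and MAX_RECORDS_PER_FETCH are assumed
// constants for illustration, not taken from the Flink source.
public class AdaptiveReadSketch {

    // Assumed target: stay under ~2 MB/s per shard (the Kinesis read limit).
    static final long READ_LIMIT_BYTES_PER_SEC = 2 * 1024 * 1024;
    static final int MAX_RECORDS_PER_FETCH = 10_000;

    /** Derive the next fetch size from the measured loop time and batch size. */
    static int adaptRecordsToRead(long runLoopTimeNanos, int numRecords, long batchSizeBytes) {
        if (numRecords == 0 || batchSizeBytes == 0) {
            return MAX_RECORDS_PER_FETCH; // nothing measured yet, fall back to the cap
        }
        long avgRecordSizeBytes = batchSizeBytes / numRecords;
        double loopSeconds = runLoopTimeNanos / 1_000_000_000.0;
        // Number of records that fit into the per-loop byte budget at the target rate.
        long records = (long) (READ_LIMIT_BYTES_PER_SEC * loopSeconds / avgRecordSizeBytes);
        return (int) Math.max(1, Math.min(records, MAX_RECORDS_PER_FETCH));
    }

    public static void main(String[] args) {
        // One-second loop with 1 KB average records.
        System.out.println(adaptRecordsToRead(1_000_000_000L, 100, 100 * 1024));
    }
}
```

For a one-second loop with 1 KB average records this yields 2048 records per fetch, i.e. the per-loop byte budget divided by the average record size.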



[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558631#comment-16558631
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

yanghua commented on issue #6390: [FLINK-9915] Add TO_BASE64 function for 
table/sql API
URL: https://github.com/apache/flink/pull/6390#issuecomment-408169883
 
 
   @xccui thanks for your suggestions, fixed them~


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558607#comment-16558607
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

yanghua commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205531038
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -2474,17 +2474,30 @@ STRING.rpad(len INT, pad STRING)
 Returns a string right-padded with the given pad string to a length 
of len characters. If the string is longer than len, the return value is 
shortened to len characters. E.g. "hi".rpad(4, '??') returns "hi??",  
"hi".rpad(1, '??') returns "h".
   
 
+
 
   
 {% highlight java %}
 STRING.fromBase64()
 {% endhighlight %}
   
-
+  
 
 Review comment:
   OK, it is because of rebasing; I will take care of it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (FLINK-9970) Add ASCII function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558605#comment-16558605
 ] 

ASF GitHub Bot commented on FLINK-9970:
---

yanghua commented on issue #6432: [FLINK-9970] Add ASCII function for table/sql 
API
URL: https://github.com/apache/flink/pull/6432#issuecomment-408164654
 
 
   @twalthr Really sorry about my carelessness. For this PR I just wanted to push it as 
soon as possible because it was midnight in China, so I did not check carefully; 
sorry for that. It has now been updated, please review it again.
   Actually, I have contributed to the table module recently and implemented five 
functions: `fromBase64`, `toBase64`, `log2(x)`, `ascii`, and `chr`.
   `ascii` and `chr(char)` are a pair of functions in many RDBMSs for converting 
between ASCII codes and characters (strings).
My purpose is to enhance Flink's table/SQL functions. I know there are more 
important functions, and I will try to implement them in the future.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add ASCII function for table/sql API
> 
>
> Key: FLINK-9970
> URL: https://issues.apache.org/jira/browse/FLINK-9970
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : 
> https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_ascii



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
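As background for the `ascii`/`chr` pairing mentioned above, here is a minimal Java sketch of the usual RDBMS semantics; the empty-string-returns-0 and modulo-256 behaviors are assumptions modeled on MySQL's ASCII() and Oracle's CHR(), not on Flink's implementation:

```java
// Hedged sketch of the ascii/chr function pair common in RDBMSs.
public class AsciiChrSketch {

    /** ASCII(str): code of the first character; 0 for an empty string (MySQL behavior). */
    static int ascii(String s) {
        return s.isEmpty() ? 0 : s.charAt(0);
    }

    /** CHR(n): the character for a code, reduced modulo 256 (assumed, Oracle-style). */
    static String chr(int code) {
        return String.valueOf((char) (code % 256));
    }

    public static void main(String[] args) {
        System.out.println(ascii("Abc")); // 65
        System.out.println(chr(65));      // A
    }
}
```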




[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558588#comment-16558588
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205524441
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
 ##
 @@ -549,6 +549,11 @@ trait ImplicitExpressionOperations {
 */
   def fromBase64() = FromBase64(expr)
 
+  /**
+* Returns a string's representation that encoded as base64.
 
 Review comment:
Returns the base64-encoded result of the input string.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558585#comment-16558585
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205526614
 
 

 ##
 File path: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/expressions/ScalarFunctionsTest.scala
 ##
 @@ -472,6 +472,40 @@ class ScalarFunctionsTest extends ScalarTypesTestBase {
   "null")
   }
 
+  @Test
+  def testToBase64(): Unit = {
+testAllApis(
+  'f0.toBase64(),
+  "f0.toBase64()",
+  "TO_BASE64(f0)",
+  "VGhpcyBpcyBhIHRlc3QgU3RyaW5nLg==")
+
+testAllApis(
+  'f8.toBase64(),
+  "f8.toBase64()",
+  "TO_BASE64(f8)",
+  "IFRoaXMgaXMgYSB0ZXN0IFN0cmluZy4g")
+
+//null test
+testAllApis(
+  'f33.toBase64(),
+  "f33.toBase64()",
+  "TO_BASE64(f33)",
+  "null")
+
+testAllApis(
 
 Review comment:
   This is actually not a test case for null.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558580#comment-16558580
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205522289
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -2474,17 +2474,30 @@ STRING.rpad(len INT, pad STRING)
 Returns a string right-padded with the given pad string to a length 
of len characters. If the string is longer than len, the return value is 
shortened to len characters. E.g. "hi".rpad(4, '??') returns "hi??",  
"hi".rpad(1, '??') returns "h".
   
 
+
 
   
 {% highlight java %}
 STRING.fromBase64()
 {% endhighlight %}
   
-
+  
 
 Review comment:
   Try to avoid meaningless updates like that.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558583#comment-16558583
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205525765
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/stringExpressions.scala
 ##
 @@ -382,4 +382,32 @@ case class FromBase64(child: Expression) extends 
UnaryExpression with InputTypeS
   }
 
   override def toString: String = s"($child).fromBase64"
+
+}
+
+/**
+  * Returns the string encoded with base64.
 
 Review comment:
   Returns the base64-encoded result of the input string.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558589#comment-16558589
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205525970
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -201,4 +201,9 @@ object ScalarFunctions {
 */
   def fromBase64(str: String): String = new String(Base64.decodeBase64(str))
 
+  /**
+* Returns a string's representation that encoded as base64.
 
 Review comment:
   Returns the base64-encoded result of the input string.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
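The encoding semantics under review ("hello world" to "aGVsbG8gd29ybGQ=", NULL in gives NULL out) can be reproduced with the JDK's `java.util.Base64`; note the diff itself uses commons-codec's `Base64.decodeBase64`. A minimal sketch:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hedged sketch mirroring the documented TO_BASE64/FROM_BASE64 semantics,
// using the JDK codec rather than commons-codec as in the actual diff.
public class Base64Sketch {

    /** TO_BASE64 semantics: encode a string, propagating null. */
    static String toBase64(String s) {
        return s == null ? null
            : Base64.getEncoder().encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }

    /** FROM_BASE64 semantics: decode a string, propagating null. */
    static String fromBase64(String s) {
        return s == null ? null
            : new String(Base64.getDecoder().decode(s), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(toBase64("hello world"));        // aGVsbG8gd29ybGQ=
        System.out.println(fromBase64("aGVsbG8gd29ybGQ=")); // hello world
    }
}
```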


[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558579#comment-16558579
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205522776
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -2474,17 +2474,30 @@ STRING.rpad(len INT, pad STRING)
 Returns a string right-padded with the given pad string to a length 
of len characters. If the string is longer than len, the return value is 
shortened to len characters. E.g. "hi".rpad(4, '??') returns "hi??",  
"hi".rpad(1, '??') returns "h".
   
 
+
 
   
 {% highlight java %}
 STRING.fromBase64()
 {% endhighlight %}
   
-
+  
   
 Returns the base string decoded with base64, if string is null, 
returns null. E.g. "aGVsbG8gd29ybGQ=".fromBase64() returns "hello world".
   
 
+
+
+  
+{% highlight java %}
+STRING.toBase64()
+{% endhighlight %}
+  
+
+  
+Returns the string text encoded with BASE64. E.g. "hello 
world".toBase64() returns "aGVsbG8gd29ybGQ=".
 
 Review comment:
   Returns the base64-encoded result of STRING. If STRING is NULL, returns NULL.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558587#comment-16558587
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205526999
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/functions/sql/ScalarSqlFunctions.scala
 ##
 @@ -177,4 +177,13 @@ object ScalarSqlFunctions {
 SqlFunctionCategory.STRING
   )
 
+  val TO_BASE64 = new SqlFunction(
+"TO_BASE64",
+SqlKind.OTHER_FUNCTION,
+ReturnTypes.cascade(ReturnTypes.explicit(SqlTypeName.VARCHAR), 
SqlTypeTransforms.TO_NULLABLE),
+InferTypes.RETURN_TYPE,
+OperandTypes.family(SqlTypeFamily.STRING),
 
 Review comment:
   Can directly use `OperandTypes.STRING` here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558584#comment-16558584
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205524074
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -2474,17 +2474,30 @@ STRING.rpad(len INT, pad STRING)
 Returns a string right-padded with the given pad string to a length 
of len characters. If the string is longer than len, the return value is 
shortened to len characters. E.g. "hi".rpad(4, '??') returns "hi??",  
"hi".rpad(1, '??') returns "h".
   
 
+
 
   
 {% highlight java %}
 STRING.fromBase64()
 {% endhighlight %}
   
-
+  
   
 Returns the base string decoded with base64, if string is null, 
returns null. E.g. "aGVsbG8gd29ybGQ=".fromBase64() returns "hello world".
   
 
+
+
+  
+{% highlight java %}
 
 Review comment:
   Add the corresponding docs for Scala.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558582#comment-16558582
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205523208
 
 

 ##
 File path: docs/dev/table/sql.md
 ##
 @@ -1848,6 +1848,17 @@ FROM_BASE64(text string)
   
 Returns the base string decoded with base64, if text is NULL, 
returns NULL. E.g. FROM_BASE64('aGVsbG8gd29ybGQ=') returns 
hello world.
   
+  
+
+
+  
+{% highlight text %}
+TO_BASE64(text string)
+{% endhighlight %}
+  
+  
+Returns the string text encoded with BASE64, if string is null, 
returns null. E.g. TO_BASE64('hello world') returns 
aGVsbG8gd29ybGQ=.
 
 Review comment:
   Returns the base64-encoded result of the input string. If the input is NULL, 
returns NULL.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558581#comment-16558581
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205522908
 
 

 ##
 File path: docs/dev/table/sql.md
 ##
 @@ -1848,6 +1848,17 @@ FROM_BASE64(text string)
   
 Returns the base string decoded with base64, if text is NULL, 
returns NULL. E.g. FROM_BASE64('aGVsbG8gd29ybGQ=') returns 
hello world.
   
+  
+
+
+  
+{% highlight text %}
+TO_BASE64(text string)
 
 Review comment:
   TO_BASE64(string)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9915) Add TO_BASE64 function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558586#comment-16558586
 ] 

ASF GitHub Bot commented on FLINK-9915:
---

xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205524700
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/stringExpressions.scala
 ##
 @@ -382,4 +382,32 @@ case class FromBase64(child: Expression) extends 
UnaryExpression with InputTypeS
   }
 
   override def toString: String = s"($child).fromBase64"
+
+}
+
+/**
+  * Returns the string encoded with base64.
+  * Returns NULL If the input string is NULL.
+  */
+case class ToBase64(child: Expression) extends UnaryExpression with 
InputTypeSpec {
+
+  override private[flink] def expectedTypes: Seq[TypeInformation[_]] = 
Seq(STRING_TYPE_INFO)
+
+  override private[flink] def resultType: TypeInformation[_] = STRING_TYPE_INFO
+
+  override private[flink] def validateInput(): ValidationResult = {
+if (child.resultType == STRING_TYPE_INFO) {
+  ValidationSuccess
+} else {
+  ValidationFailure(s"ToBase64 operator requires String input, " +
 
 Review comment:
   ToBase64 operator requires **a** String input


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add TO_BASE64 function for table/sql API
> 
>
> Key: FLINK-9915
> URL: https://issues.apache.org/jira/browse/FLINK-9915
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to mysql TO_BASE64 function : 
> https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 function for table/sql API

2018-07-26 Thread GitBox
xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205524441
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
 ##
 @@ -549,6 +549,11 @@ trait ImplicitExpressionOperations {
 */
   def fromBase64() = FromBase64(expr)
 
+  /**
+* Returns a string's representation that encoded as base64.
 
 Review comment:
Returns the base64-encoded result of the input string.




[GitHub] xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 function for table/sql API

2018-07-26 Thread GitBox
xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205525765
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/stringExpressions.scala
 ##
 @@ -382,4 +382,32 @@ case class FromBase64(child: Expression) extends 
UnaryExpression with InputTypeS
   }
 
   override def toString: String = s"($child).fromBase64"
+
+}
+
+/**
+  * Returns the string encoded with base64.
 
 Review comment:
   Returns the base64-encoded result of the input string.






[GitHub] xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 function for table/sql API

2018-07-26 Thread GitBox
xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205526999
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/functions/sql/ScalarSqlFunctions.scala
 ##
 @@ -177,4 +177,13 @@ object ScalarSqlFunctions {
 SqlFunctionCategory.STRING
   )
 
+  val TO_BASE64 = new SqlFunction(
+"TO_BASE64",
+SqlKind.OTHER_FUNCTION,
+ReturnTypes.cascade(ReturnTypes.explicit(SqlTypeName.VARCHAR), 
SqlTypeTransforms.TO_NULLABLE),
+InferTypes.RETURN_TYPE,
+OperandTypes.family(SqlTypeFamily.STRING),
 
 Review comment:
   Can directly use `OperandTypes.STRING` here.




[GitHub] xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 function for table/sql API

2018-07-26 Thread GitBox
xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205525970
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -201,4 +201,9 @@ object ScalarFunctions {
 */
   def fromBase64(str: String): String = new String(Base64.decodeBase64(str))
 
+  /**
+* Returns a string's representation that encoded as base64.
 
 Review comment:
   Returns the base64-encoded result of the input string.


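For reference, the encode/decode pair discussed in this review can be sketched as standalone helpers. This is a sketch only: it uses the JDK's `java.util.Base64` so the snippet is self-contained, whereas the code under review uses commons-codec's `Base64`; the object and method names below are hypothetical mirrors of the PR's `ScalarFunctions` methods.

```scala
// Self-contained sketch of the toBase64/fromBase64 pair under review,
// using java.util.Base64 instead of commons-codec (an assumption for
// runnability; behavior for non-null input is the same).
object Base64Helpers {
  def toBase64(str: String): String =
    java.util.Base64.getEncoder.encodeToString(str.getBytes("UTF-8"))

  def fromBase64(str: String): String =
    new String(java.util.Base64.getDecoder.decode(str), "UTF-8")
}
```

This matches the example in the docs under review: `Base64Helpers.toBase64("hello world")` yields `"aGVsbG8gd29ybGQ="`, and decoding that result returns the original string.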


[GitHub] xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 function for table/sql API

2018-07-26 Thread GitBox
xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205523208
 
 

 ##
 File path: docs/dev/table/sql.md
 ##
 @@ -1848,6 +1848,17 @@ FROM_BASE64(text string)
   
 Returns the base string decoded with base64, if text is NULL, 
returns NULL. E.g. FROM_BASE64('aGVsbG8gd29ybGQ=') returns 
hello world.
   
+  
+
+
+  
+{% highlight text %}
+TO_BASE64(text string)
+{% endhighlight %}
+  
+  
+Returns the string text encoded with BASE64, if string is null, 
returns null. E.g. TO_BASE64('hello world') returns 
aGVsbG8gd29ybGQ=.
 
 Review comment:
   Returns the base64-encoded result of the input string. If the input is NULL, 
returns NULL.




[GitHub] xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 function for table/sql API

2018-07-26 Thread GitBox
xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205524074
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -2474,17 +2474,30 @@ STRING.rpad(len INT, pad STRING)
 Returns a string right-padded with the given pad string to a length 
of len characters. If the string is longer than len, the return value is 
shortened to len characters. E.g. "hi".rpad(4, '??') returns "hi??",  
"hi".rpad(1, '??') returns "h".
   
 
+
 
   
 {% highlight java %}
 STRING.fromBase64()
 {% endhighlight %}
   
-
+  
   
 Returns the base string decoded with base64, if string is null, 
returns null. E.g. "aGVsbG8gd29ybGQ=".fromBase64() returns "hello world".
   
 
+
+
+  
+{% highlight java %}
 
 Review comment:
   Add the corresponding docs for scala.




[GitHub] xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 function for table/sql API

2018-07-26 Thread GitBox
xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205526614
 
 

 ##
 File path: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/expressions/ScalarFunctionsTest.scala
 ##
 @@ -472,6 +472,40 @@ class ScalarFunctionsTest extends ScalarTypesTestBase {
   "null")
   }
 
+  @Test
+  def testToBase64(): Unit = {
+testAllApis(
+  'f0.toBase64(),
+  "f0.toBase64()",
+  "TO_BASE64(f0)",
+  "VGhpcyBpcyBhIHRlc3QgU3RyaW5nLg==")
+
+testAllApis(
+  'f8.toBase64(),
+  "f8.toBase64()",
+  "TO_BASE64(f8)",
+  "IFRoaXMgaXMgYSB0ZXN0IFN0cmluZy4g")
+
+//null test
+testAllApis(
+  'f33.toBase64(),
+  "f33.toBase64()",
+  "TO_BASE64(f33)",
+  "null")
+
+testAllApis(
 
 Review comment:
   This is actually not a test case for null.


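The null-handling point raised above can be made concrete with a small standalone sketch (a hypothetical helper, not the PR's code): a genuine null test must feed an actual null value, whereas the flagged case feeds a non-null string field.

```scala
// Hypothetical null-propagating wrapper illustrating the review comment:
// TO_BASE64 should return null for a null input, so a null test case must
// actually pass null rather than a non-null field.
def toBase64Nullable(s: String): String =
  if (s == null) null
  else java.util.Base64.getEncoder.encodeToString(s.getBytes("UTF-8"))
```

With this wrapper, `toBase64Nullable(null)` returns null, while any non-null input, empty or not, returns an encoded string.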


[GitHub] xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 function for table/sql API

2018-07-26 Thread GitBox
xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205522776
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -2474,17 +2474,30 @@ STRING.rpad(len INT, pad STRING)
 Returns a string right-padded with the given pad string to a length 
of len characters. If the string is longer than len, the return value is 
shortened to len characters. E.g. "hi".rpad(4, '??') returns "hi??",  
"hi".rpad(1, '??') returns "h".
   
 
+
 
   
 {% highlight java %}
 STRING.fromBase64()
 {% endhighlight %}
   
-
+  
   
 Returns the base string decoded with base64, if string is null, 
returns null. E.g. "aGVsbG8gd29ybGQ=".fromBase64() returns "hello world".
   
 
+
+
+  
+{% highlight java %}
+STRING.toBase64()
+{% endhighlight %}
+  
+
+  
+Returns the string text encoded with BASE64. E.g. "hello 
world".toBase64() returns "aGVsbG8gd29ybGQ=".
 
 Review comment:
   Returns the base64-encoded result of STRING. If STRING is NULL, returns NULL.




[GitHub] xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 function for table/sql API

2018-07-26 Thread GitBox
xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205522289
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -2474,17 +2474,30 @@ STRING.rpad(len INT, pad STRING)
 Returns a string right-padded with the given pad string to a length 
of len characters. If the string is longer than len, the return value is 
shortened to len characters. E.g. "hi".rpad(4, '??') returns "hi??",  
"hi".rpad(1, '??') returns "h".
   
 
+
 
   
 {% highlight java %}
 STRING.fromBase64()
 {% endhighlight %}
   
-
+  
 
 Review comment:
   Try to avoid meaningless updates like that.




[GitHub] xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 function for table/sql API

2018-07-26 Thread GitBox
xccui commented on a change in pull request #6390: [FLINK-9915] Add TO_BASE64 
function for table/sql API
URL: https://github.com/apache/flink/pull/6390#discussion_r205522908
 
 

 ##
 File path: docs/dev/table/sql.md
 ##
 @@ -1848,6 +1848,17 @@ FROM_BASE64(text string)
   
 Returns the base string decoded with base64, if text is NULL, 
returns NULL. E.g. FROM_BASE64('aGVsbG8gd29ybGQ=') returns 
hello world.
   
+  
+
+
+  
+{% highlight text %}
+TO_BASE64(text string)
 
 Review comment:
   TO_BASE64(string)




[GitHub] fhueske commented on issue #6423: [FLINK-9935] [table] Fix incorrect group field access in batch window combiner.

2018-07-26 Thread GitBox
fhueske commented on issue #6423: [FLINK-9935] [table] Fix incorrect group 
field access in batch window combiner.
URL: https://github.com/apache/flink/pull/6423#issuecomment-408155116
 
 
   Merged




[jira] [Closed] (FLINK-9935) Batch Table API: grouping by window and attribute causes java.lang.ClassCastException:

2018-07-26 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-9935.

   Resolution: Fixed
Fix Version/s: 1.7.0
   1.6.0
   1.5.3
   1.4.3

Fixed for 1.7.0 with 63e84ad6532aff561d3abbd0dbadd420f836b28f
Fixed for 1.6.0 with 80964d27ad3518c1df02c8f8aa8be5bee22490b4
Fixed for 1.5.3 with 9dee15b780470eb4e60e07ddd82f0b280faa875e
Fixed for 1.4.3 with c5223fdecdc8ef5889609bebfd7e259600a2179a

> Batch Table API: grouping by window and attribute causes 
> java.lang.ClassCastException:
> --
>
> Key: FLINK-9935
> URL: https://issues.apache.org/jira/browse/FLINK-9935
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.2, 1.5.1, 1.6.0, 1.7.0
>Reporter: Roman Wozniak
>Assignee: Fabian Hueske
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.4.3, 1.5.3, 1.6.0, 1.7.0
>
>
>  Grouping by window AND some other attribute(s) seems broken. Test case 
> attached:
> {code}
> class BatchStatisticsIntegrationTest extends FlatSpec with Matchers {
>   trait BatchContext {
> implicit lazy val env: ExecutionEnvironment = 
> ExecutionEnvironment.getExecutionEnvironment
> implicit val tableEnv: BatchTableEnvironment = 
> TableEnvironment.getTableEnvironment(env)
> val data = Seq(
>   (1532424567000L, "id1", "location1"),
>   (1532424567000L, "id2", "location1"),
>   (1532424567000L, "id3", "location1"),
>   (1532424568000L, "id1", "location2"),
>   (1532424568000L, "id2", "location3")
> )
> val rawDataSet: DataSet[(Long, String, String)] = env.fromCollection(data)
> val table: Table = tableEnv.fromDataSet(rawDataSet, 'rowtime, 'id, 
> 'location)
>   }
>   it should "be possible to run Table API queries with grouping by tumble 
> window and column(s) on batch data" in new BatchContext {
> val results = table
>   .window(Tumble over 1.second on 'rowtime as 'w)
>   .groupBy('w, 'location)
>   .select(
> 'w.start.cast(Types.LONG),
> 'w.end.cast(Types.LONG),
> 'location,
> 'id.count
>   )
>   .toDataSet[(Long, Long, String, Long)]
>   .collect()
> results should contain theSameElementsAs Seq(
>   (1532424567000L, 1532424568000L, "location1", 3L),
>   (1532424568000L, 1532424569000L, "location2", 1L),
>   (1532424568000L, 1532424569000L, "location3", 1L)
> )
>   }
> }
> {code}
> It seems like during execution time, the 'rowtime attribute replaces 
> 'location and that causes ClassCastException.
> {code:java}
> [info]   Cause: java.lang.ClassCastException: java.lang.Long cannot be cast 
> to java.lang.String
> [info]   at 
> org.apache.flink.api.common.typeutils.base.StringSerializer.serialize(StringSerializer.java:28)
> [info]   at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.serialize(RowSerializer.java:160)
> [info]   at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.serialize(RowSerializer.java:46)
> [info]   at 
> org.apache.flink.runtime.plugable.SerializationDelegate.write(SerializationDelegate.java:54)
> [info]   at 
> org.apache.flink.runtime.io.network.api.serialization.SpanningRecordSerializer.addRecord(SpanningRecordSerializer.java:88)
> [info]   at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:129)
> [info]   at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:105)
> [info]   at 
> org.apache.flink.runtime.operators.shipping.OutputCollector.collect(OutputCollector.java:65)
> [info]   at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
> [info]   at 
> org.apache.flink.api.java.operators.translation.RichCombineToGroupCombineWrapper.combine(RichCombineToGroupCombineWrapper.java:52)
> {code}
> Here is some debug information that I was able to get. So, field serializers 
> don't match the type of Row fields:
> {code}
> this.instance = {Row@68451} "1532424567000,(3),1532424567000"
>  fields = {Object[3]@68461} 
>   0 = {Long@68462} 1532424567000
>   1 = {CountAccumulator@68463} "(3)"
>   2 = {Long@68462} 1532424567000
> this.serializer = {RowSerializer@68452} 
>  fieldSerializers = {TypeSerializer[3]@68455} 
>   0 = {StringSerializer@68458} 
>   1 = {TupleSerializer@68459} 
>   2 = {LongSerializer@68460} 
>  arity = 3
>  nullMask = {boolean[3]@68457} 
> {code}
>  







[GitHub] fhueske closed pull request #6423: [FLINK-9935] [table] Fix incorrect group field access in batch window combiner.

2018-07-26 Thread GitBox
fhueske closed pull request #6423: [FLINK-9935] [table] Fix incorrect group 
field access in batch window combiner.
URL: https://github.com/apache/flink/pull/6423
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/aggregate/AggregateUtil.scala
 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/aggregate/AggregateUtil.scala
index 7ce44a6c834..a3c9c1e3f6a 100644
--- 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/aggregate/AggregateUtil.scala
+++ 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/aggregate/AggregateUtil.scala
@@ -586,7 +586,7 @@ object AggregateUtil {
   isDistinctAggs,
   isStateBackedDataViews = false,
   partialResults = true,
-  groupings,
+  groupings.indices.toArray,
   Some(aggregates.indices.map(_ + groupings.length).toArray),
   outputType.getFieldCount,
   needRetract,
diff --git 
a/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/runtime/batch/table/GroupWindowITCase.scala
 
b/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/runtime/batch/table/GroupWindowITCase.scala
index 3d9223e0953..2c984c16070 100644
--- 
a/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/runtime/batch/table/GroupWindowITCase.scala
+++ 
b/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/runtime/batch/table/GroupWindowITCase.scala
@@ -97,6 +97,7 @@ class GroupWindowITCase(
 val table = env
   .fromCollection(data)
   .toTable(tEnv, 'long, 'int, 'double, 'float, 'bigdec, 'string)
+  .select('int, 'long, 'string) // keep this select to enforce that the 
'string key comes last
 
 val windowedTable = table
   .window(Tumble over 5.milli on 'long as 'w)
@@ -271,6 +272,7 @@ class GroupWindowITCase(
 val table = env
   .fromCollection(data)
   .toTable(tEnv, 'long, 'int, 'double, 'float, 'bigdec, 'string)
+  .select('int, 'long, 'string) // keep this select to enforce that the 
'string key comes last
 
 val windowedTable = table
   .window(Slide over 10.milli every 5.milli on 'long as 'w)


 


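The one-line fix in the diff above (`groupings` becoming `groupings.indices.toArray`) can be illustrated in isolation: after the partial aggregation, the grouping keys are packed at the front of the intermediate row, so the combiner must address them by position 0..n-1 rather than by their original input indices. The array values below are hypothetical.

```scala
// Illustration of the AggregateUtil fix: grouping fields sit at arbitrary
// positions in the input row, but in the combiner's intermediate row they
// occupy the leading positions, so the key indices must be remapped.
val groupings = Array(1, 5)  // hypothetical key positions in the input row
val combinerGroupings = groupings.indices.toArray  // positions in the intermediate row
```

Using the original `groupings` here made the combiner read a non-key field (such as the rowtime) as a key, which is exactly the `ClassCastException` reported in the issue.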


[jira] [Closed] (FLINK-9951) Update scm developerConnection

2018-07-26 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-9951.
---
   Resolution: Fixed
Fix Version/s: (was: 1.3.4)

master: fef9300c80fbd3c0215cd6810ed67d6ddeb6ae60
1.6: 27c45402a4c00f8ca70861277728fbf0d7b33361
1.5: ce149bac7e821e09453c5fb8954f687bc73e14fa
1.4: 2428b3a8682521da02ee342955fc18c2ea29aa42

> Update scm developerConnection
> --
>
> Key: FLINK-9951
> URL: https://issues.apache.org/jira/browse/FLINK-9951
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 1.3.3, 1.4.2, 1.5.1, 1.6.0, 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.4.3, 1.5.3, 1.6.0, 1.7.0
>
>
> The developer connection must be updated to point to the update remote.





[jira] [Commented] (FLINK-9951) Update scm developerConnection

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558479#comment-16558479
 ] 

ASF GitHub Bot commented on FLINK-9951:
---

zentol closed pull request #6420: [FLINK-9951][build] Update scm 
developerConnection
URL: https://github.com/apache/flink/pull/6420
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/pom.xml b/pom.xml
index 3540215b29f..9f9b5a450b4 100644
--- a/pom.xml
+++ b/pom.xml
@@ -46,7 +46,7 @@ under the License.
 
 	<url>https://github.com/apache/flink</url>
 	<connection>g...@github.com:apache/flink.git</connection>
-	<developerConnection>scm:git:https://git-wip-us.apache.org/repos/asf/flink.git</developerConnection>
+	<developerConnection>scm:git:https://gitbox.apache.org/repos/asf/flink.git</developerConnection>

 



 




> Update scm developerConnection
> --
>
> Key: FLINK-9951
> URL: https://issues.apache.org/jira/browse/FLINK-9951
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 1.3.3, 1.4.2, 1.5.1, 1.6.0, 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.3.4, 1.4.3, 1.5.3, 1.6.0, 1.7.0
>
>
> The developer connection must be updated to point to the updated remote.





[GitHub] zentol closed pull request #6420: [FLINK-9951][build] Update scm developerConnection

2018-07-26 Thread GitBox
zentol closed pull request #6420: [FLINK-9951][build] Update scm 
developerConnection
URL: https://github.com/apache/flink/pull/6420
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/pom.xml b/pom.xml
index 3540215b29f..9f9b5a450b4 100644
--- a/pom.xml
+++ b/pom.xml
@@ -46,7 +46,7 @@ under the License.
 
 	<url>https://github.com/apache/flink</url>
 	<connection>g...@github.com:apache/flink.git</connection>
-	<developerConnection>scm:git:https://git-wip-us.apache.org/repos/asf/flink.git</developerConnection>
+	<developerConnection>scm:git:https://gitbox.apache.org/repos/asf/flink.git</developerConnection>

 



 




[jira] [Commented] (FLINK-9970) Add ASCII function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558472#comment-16558472
 ] 

ASF GitHub Bot commented on FLINK-9970:
---

twalthr commented on a change in pull request #6432: [FLINK-9970] Add ASCII 
function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r205511437
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -2477,6 +2477,17 @@ STRING.rpad(len INT, pad STRING)
 
   
 {% highlight java %}
+STRING.ascii()
+{% endhighlight %}
+  
+
+  
+Returns the numeric value of the leftmost character of the string 
str. Returns 0 if str is the empty string. Returns NULL if str is NULL. E.g. 
"THIS".fromBase64() returns "84".
 
 Review comment:
   Base64 does not belong here.
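
Setting the Base64 mix-up aside, the semantics the proposed documentation describes (code point of the leftmost character, 0 for the empty string, NULL for NULL input) can be sketched in a few lines of Python; `ascii_fn` is a hypothetical stand-in, not the Flink implementation:

```python
def ascii_fn(s):
    """SQL-style ASCII(): code point of the leftmost character,
    0 for an empty string, None (i.e. SQL NULL) for NULL input."""
    if s is None:
        return None
    if s == "":
        return 0
    return ord(s[0])

print(ascii_fn("THIS"))  # 84, the code point of 'T'
```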




> Add ASCII function for table/sql API
> 
>
> Key: FLINK-9970
> URL: https://issues.apache.org/jira/browse/FLINK-9970
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : 
> https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_ascii





[jira] [Commented] (FLINK-9970) Add ASCII function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558470#comment-16558470
 ] 

ASF GitHub Bot commented on FLINK-9970:
---

twalthr commented on a change in pull request #6432: [FLINK-9970] Add ASCII 
function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r205511358
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -2477,6 +2477,17 @@ STRING.rpad(len INT, pad STRING)
 
   
 {% highlight java %}
+STRING.ascii()
 
 Review comment:
   Also add Scala docs.




> Add ASCII function for table/sql API
> 
>
> Key: FLINK-9970
> URL: https://issues.apache.org/jira/browse/FLINK-9970
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : 
> https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_ascii





[jira] [Commented] (FLINK-9970) Add ASCII function for table/sql API

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558471#comment-16558471
 ] 

ASF GitHub Bot commented on FLINK-9970:
---

twalthr commented on a change in pull request #6432: [FLINK-9970] Add ASCII 
function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r205511293
 
 

 ##
 File path: docs/dev/table/sql.md
 ##
 @@ -1842,6 +1842,16 @@ RPAD(text string, len integer, pad string)
 
   
 {% highlight text %}
+ASCII(text string)
+{% endhighlight %}
+  
+  
+Returns the numeric value of the leftmost character of the string 
str. Returns 0 if str is the empty string. Returns NULL if str is NULL. E.g. 
ASCII('THIS') returns 84.
 
 Review comment:
   Please do not copy documentation from other projects. Especially if other 
projects are not Apache licensed. Even worse than copying is copying but not 
adjusting variable names. You use `text string` in the parameter but `str` in 
the description.




> Add ASCII function for table/sql API
> 
>
> Key: FLINK-9970
> URL: https://issues.apache.org/jira/browse/FLINK-9970
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : 
> https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_ascii





  1   2   3   >