[GitHub] [flink] MartijnVisser commented on a change in pull request #17930: [FLINK-24326][docs][connectors] Update elasticsearch sink pages with new unified sink API (FLIP-143) implementation

2021-12-01 Thread GitBox


MartijnVisser commented on a change in pull request #17930:
URL: https://github.com/apache/flink/pull/17930#discussion_r759937040



##
File path: docs/content/docs/connectors/datastream/elasticsearch.md
##
@@ -43,15 +43,11 @@ of the Elasticsearch installation:
   
   
 
-5.x
-{{< artifact flink-connector-elasticsearch5 >}}
-
-
-6.x
+<= 6.3.1

Review comment:
   That makes sense. Let's just merge the situation "as is" and check 
separately if we can easily bump both connectors to the latest version.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] MartijnVisser commented on a change in pull request #17930: [FLINK-24326][docs][connectors] Update elasticsearch sink pages with new unified sink API (FLIP-143) implementation

2021-12-01 Thread GitBox


MartijnVisser commented on a change in pull request #17930:
URL: https://github.com/apache/flink/pull/17930#discussion_r759937660



##
File path: docs/content/docs/connectors/datastream/elasticsearch.md
##
@@ -65,240 +61,90 @@ about how to package the program with the libraries for 
cluster execution.
 
 Instructions for setting up an Elasticsearch cluster can be found
 
[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html).
-Make sure to set and remember a cluster name. This must be set when
-creating an `ElasticsearchSink` for requesting document actions against your 
cluster.
 
 ## Elasticsearch Sink
 
-The `ElasticsearchSink` uses a `TransportClient` (before 6.x) or 
`RestHighLevelClient` (starting with 6.x) to communicate with an
-Elasticsearch cluster.
-
 The example below shows how to configure and create a sink:
 
 {{< tabs "51732edd-4218-470e-adad-b1ebb4021ae4" >}}
-{{< tab "java, 5.x" >}}
-```java
-import org.apache.flink.api.common.functions.RuntimeContext;
-import org.apache.flink.streaming.api.datastream.DataStream;
-import 
org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
-import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
-import org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink;
-
-import org.elasticsearch.action.index.IndexRequest;
-import org.elasticsearch.client.Requests;
-
-import java.net.InetAddress;
-import java.net.InetSocketAddress;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-DataStream<String> input = ...;
-
-Map<String, String> config = new HashMap<>();
-config.put("cluster.name", "my-cluster-name");
-// This instructs the sink to emit after every element, otherwise they would be buffered
-config.put("bulk.flush.max.actions", "1");
-
-List<InetSocketAddress> transportAddresses = new ArrayList<>();
-transportAddresses.add(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9300));
-transportAddresses.add(new InetSocketAddress(InetAddress.getByName("10.2.3.1"), 9300));
-
-input.addSink(new ElasticsearchSink<>(config, transportAddresses, new ElasticsearchSinkFunction<String>() {
-    public IndexRequest createIndexRequest(String element) {
-        Map<String, String> json = new HashMap<>();
-        json.put("data", element);
-
-        return Requests.indexRequest()
-                .index("my-index")
-                .type("my-type")
-                .source(json);
-    }
-
-    @Override
-    public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
-        indexer.add(createIndexRequest(element));
-    }
-}));
-```
-{{< /tab >}}
 {{< tab "java, Elasticsearch 6.x and above" >}}

Review comment:
   Yeah, I think it's better to mention in one place which versions we 
support and then list it in the example as ES6 and ES7.
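
   For readers following along, here is a minimal sketch of what the ES6/ES7 example looks like against the new unified sink API (FLIP-143) that this PR documents. The `Elasticsearch7SinkBuilder` name and its setters are assumptions based on the connector work this PR refers to, so the wording in the final docs may differ:

   ```java
   import org.apache.flink.connector.elasticsearch.sink.Elasticsearch7SinkBuilder;
   import org.apache.flink.streaming.api.datastream.DataStream;

   import org.apache.http.HttpHost;
   import org.elasticsearch.client.Requests;

   import java.util.Collections;

   DataStream<String> input = ...;

   input.sinkTo(
       new Elasticsearch7SinkBuilder<String>()
           // emit after every element, otherwise requests would be buffered
           .setBulkFlushMaxActions(1)
           .setHosts(new HttpHost("127.0.0.1", 9200, "http"))
           .setEmitter((element, context, indexer) ->
               indexer.add(
                   Requests.indexRequest()
                       .index("my-index")
                       .source(Collections.singletonMap("data", element))))
           .build());
   ```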




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul merged pull request #17947: [FLINK-24596][kafka] Fix buffered KafkaUpsert sink

2021-12-01 Thread GitBox


fapaul merged pull request #17947:
URL: https://github.com/apache/flink/pull/17947


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] MartijnVisser commented on a change in pull request #17941: [FLINK-25092][tests][elasticsearch] Refactor test to use artifact cacher

2021-12-01 Thread GitBox


MartijnVisser commented on a change in pull request #17941:
URL: https://github.com/apache/flink/pull/17941#discussion_r759939176



##
File path: flink-end-to-end-tests/test-scripts/elasticsearch-common.sh
##
@@ -37,20 +37,10 @@ function setup_elasticsearch {
 local elasticsearchDir=$TEST_DATA_DIR/elasticsearch
 mkdir -p $elasticsearchDir
 echo "Downloading Elasticsearch from $downloadUrl ..."
-for i in {1..10};
-do
-wget "$downloadUrl" -O $TEST_DATA_DIR/elasticsearch.tar.gz
-if [ $? -eq 0 ]; then
-echo "Download successful."
-echo "Extracting..."
-tar xzf $TEST_DATA_DIR/elasticsearch.tar.gz -C $elasticsearchDir 
--strip-components=1
-if [ $? -eq 0 ]; then
-break
-fi
-fi
-echo "Attempt $i failed."
-sleep 5
-done
+retry_times_with_exponential_backoff 10 wget "$downloadUrl" -O 
$TEST_DATA_DIR/elasticsearch.tar.gz
+echo "Download successful."
+echo "Extracting..."
+tar xzf $TEST_DATA_DIR/elasticsearch.tar.gz -C $elasticsearchDir 
--strip-components=1

Review comment:
   Agreed, let's do that in a separate ticket and PR to update the 
`get_artifact` function to check if the download has been completed 
successfully. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24596) Bugs in sink.buffer-flush before upsert-kafka

2021-12-01 Thread Fabian Paul (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451593#comment-17451593
 ] 

Fabian Paul commented on FLINK-24596:
-

Fixed on release-1.14: 66cf720388c5697d5bf98c10b3dd348a0704a710

> Bugs in sink.buffer-flush before upsert-kafka
> -
>
> Key: FLINK-24596
> URL: https://issues.apache.org/jira/browse/FLINK-24596
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Jingsong Lee
>Assignee: Fabian Paul
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.14.1
>
>
> There is no ITCase for sink.buffer-flush before upsert-kafka. We should add 
> it.
> FLINK-23735 introduced some bugs:
> * SinkBufferFlushMode bufferFlushMode not Serializable
> * Function valueCopyFunction not Serializable
> * Planner does not support DataStreamProvider with new Sink
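
For context, a minimal sketch (not from the ticket) of the kind of configuration the missing ITCase would exercise: an upsert-kafka sink with buffering enabled. The option names follow the documented `sink.buffer-flush.*` keys; the topic, field, and server names are made up.

```java
// Assumes a StreamTableEnvironment named tableEnv has already been created.
tableEnv.executeSql(
        "CREATE TABLE buffered_upsert_sink (\n"
                + "  user_id BIGINT,\n"
                + "  cnt BIGINT,\n"
                + "  PRIMARY KEY (user_id) NOT ENFORCED\n"
                + ") WITH (\n"
                + "  'connector' = 'upsert-kafka',\n"
                + "  'topic' = 'counts',\n"
                + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                + "  'key.format' = 'json',\n"
                + "  'value.format' = 'json',\n"
                + "  'sink.buffer-flush.max-rows' = '100',\n"
                + "  'sink.buffer-flush.interval' = '1s'\n"
                + ")");
```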



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (FLINK-24596) Bugs in sink.buffer-flush before upsert-kafka

2021-12-01 Thread Fabian Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Paul resolved FLINK-24596.
-
Resolution: Fixed

> Bugs in sink.buffer-flush before upsert-kafka
> -
>
> Key: FLINK-24596
> URL: https://issues.apache.org/jira/browse/FLINK-24596
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Jingsong Lee
>Assignee: Fabian Paul
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.14.1
>
>
> There is no ITCase for sink.buffer-flush before upsert-kafka. We should add 
> it.
> FLINK-23735 introduced some bugs:
> * SinkBufferFlushMode bufferFlushMode not Serializable
> * Function valueCopyFunction not Serializable
> * Planner does not support DataStreamProvider with new Sink



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] MartijnVisser commented on a change in pull request #17941: [FLINK-25092][tests][elasticsearch] Refactor test to use artifact cacher

2021-12-01 Thread GitBox


MartijnVisser commented on a change in pull request #17941:
URL: https://github.com/apache/flink/pull/17941#discussion_r759941289



##
File path: flink-end-to-end-tests/test-scripts/elasticsearch-common.sh
##
@@ -37,20 +37,10 @@ function setup_elasticsearch {
 local elasticsearchDir=$TEST_DATA_DIR/elasticsearch
 mkdir -p $elasticsearchDir
 echo "Downloading Elasticsearch from $downloadUrl ..."
-for i in {1..10};
-do
-wget "$downloadUrl" -O $TEST_DATA_DIR/elasticsearch.tar.gz
-if [ $? -eq 0 ]; then
-echo "Download successful."
-echo "Extracting..."
-tar xzf $TEST_DATA_DIR/elasticsearch.tar.gz -C $elasticsearchDir 
--strip-components=1
-if [ $? -eq 0 ]; then
-break
-fi
-fi
-echo "Attempt $i failed."
-sleep 5
-done
+retry_times_with_exponential_backoff 10 wget "$downloadUrl" -O 
$TEST_DATA_DIR/elasticsearch.tar.gz
+echo "Download successful."
+echo "Extracting..."
+tar xzf $TEST_DATA_DIR/elasticsearch.tar.gz -C $elasticsearchDir 
--strip-components=1

Review comment:
   I've created https://issues.apache.org/jira/browse/FLINK-25125 for this




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-25125) Verify that downloaded artifact is not corrupt

2021-12-01 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-25125:
--

 Summary: Verify that downloaded artifact is not corrupt
 Key: FLINK-25125
 URL: https://issues.apache.org/jira/browse/FLINK-25125
 Project: Flink
  Issue Type: Improvement
  Components: Test Infrastructure
Reporter: Martijn Visser


The Bash-based e2e tests use the `get_artifact` function to either grab an 
artifact from the cache or download it for testing. We should 
add a verification that the downloaded artifact is not corrupt.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25126) when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 'exactly-once',kafka conncetor will commit fail

2021-12-01 Thread eric yu (Jira)
eric yu created FLINK-25126:
---

 Summary: when SET 'execution.runtime-mode' = 'batch' and 
'sink.delivery-guarantee' = 'exactly-once',kafka conncetor will commit fail
 Key: FLINK-25126
 URL: https://issues.apache.org/jira/browse/FLINK-25126
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.14.0
 Environment: SET 'execution.runtime-mode' = 'batch';

 CREATE TABLE ka15 (
     name String,
     cnt bigint
 ) WITH (
   'connector' = 'kafka',
   'topic' = 'shifang',
  'properties.bootstrap.servers' = 'flinkx1:9092',
  'properties.transaction.timeout.ms' = '80',
  'properties.max.block.ms' = '30',
   'value.format' = 'json',
   'sink.parallelism' = '2',
   'sink.delivery-guarantee' = 'exactly-once',
   'sink.transactional-id-prefix' = 'dtstack');

 

insert into ka15 SELECT
  name,
  cnt
FROM
  (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
NameTable(name,cnt);
Reporter: eric yu


The Flink SQL task submitted by the SQL client fails:
 
Caused by: java.lang.IllegalStateException
at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
at 
org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
at 
org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:144)
at 
org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:76)
at java.util.Optional.orElseGet(Optional.java:267)
at 
org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:76)
... 14 more
 
 
I found the reason why the Kafka commit fails: while the downstream operator 
CommitterOperator is committing the transaction, the upstream operator 
SinkOperator has already closed, which aborts the transaction that the 
CommitterOperator is still committing. This only occurs when execution.runtime-mode is batch.
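
For comparison, here is a hedged DataStream-level sketch of the same sink configuration (the Kafka table connector delegates to `KafkaSink`, which is where the `KafkaCommitter` in the stack trace comes from). Builder method names follow the 1.14 API, including the `setDeliverGuarantee` spelling used there; broker, topic, and prefix values are the ones from the SQL above.

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH); // the mode that triggers the failure

KafkaSink<String> sink =
        KafkaSink.<String>builder()
                .setBootstrapServers("flinkx1:9092")
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("shifang")
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                .setTransactionalIdPrefix("dtstack")
                .build();

// Bounded input, so the SinkOperator finishes before the committer runs.
env.fromElements("Bob,100", "Alice,100", "Greg,100", "Bob,100").sinkTo(sink);
```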



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17971: Update kafka.md

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17971:
URL: https://github.com/apache/flink/pull/17971#issuecomment-983331984


   
   ## CI report:
   
   * 8ae9baf7cb758e0df2e70d399acc08a13a651aea Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27330)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr commented on a change in pull request #17932: [FLINK-25079][table-common] Add some initial assertj assertions for table data and types apis

2021-12-01 Thread GitBox


twalthr commented on a change in pull request #17932:
URL: https://github.com/apache/flink/pull/17932#discussion_r759941929



##
File path: 
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/core/testutils/FlinkAssertions.java
##
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.core.testutils;
+
+import org.assertj.core.api.AbstractThrowableAssert;
+import org.assertj.core.api.AssertFactory;
+import org.assertj.core.api.Assertions;
+import org.assertj.core.api.InstanceOfAssertFactory;
+import org.assertj.core.api.ListAssert;
+
+import java.util.function.Function;
+import java.util.stream.Stream;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Some reusable assertions and utilities for AssertJ. */
+public final class FlinkAssertions {
+
+private FlinkAssertions() {}
+
+/** @see #chainOfCauses(Throwable) */
+@SuppressWarnings({"rawtypes", "unused"})
+public static final InstanceOfAssertFactory<Stream, ListAssert<Throwable>> STREAM_THROWABLE =
+new InstanceOfAssertFactory<>(Stream.class, Assertions::assertThat);
+
+/**
+ * Shorthand to assert chain of causes. Same as:
+ * assertThat(throwable)
+ *     .extracting(FlinkAssertions::chainOfCauses, FlinkAssertions.STREAM_THROWABLE)
+ */
+public ListAssert<Throwable> assertThatChainOfCauses(Throwable root) {

Review comment:
   method is not static
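
   A small usage sketch of what this helper enables in tests (assuming it becomes static as suggested; the exception below is made up):

   ```java
   import static org.apache.flink.core.testutils.FlinkAssertions.assertThatChainOfCauses;
   import static org.assertj.core.api.Assertions.assertThat;

   // Walks the cause chain and asserts on any element of it.
   Throwable failure = new RuntimeException("outer", new IllegalStateException("root cause"));
   assertThatChainOfCauses(failure)
           .anySatisfy(cause -> assertThat(cause).hasMessage("root cause"));
   ```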




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25126) when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 'exactly-once',kafka conncetor will commit fail

2021-12-01 Thread eric yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eric yu updated FLINK-25126:

Description: 
The Flink SQL task submitted by the SQL client fails.

This is the SQL:

SET 'execution.runtime-mode' = 'batch';

 CREATE TABLE ka15 (
     name String,
     cnt bigint
 ) WITH (
   'connector' = 'kafka',
   'topic' = 'shifang',
  'properties.bootstrap.servers' = 'flinkx1:9092',
  'properties.transaction.timeout.ms' = '80',
  'properties.max.block.ms' = '30',
   'value.format' = 'json',
   'sink.parallelism' = '2',
   'sink.delivery-guarantee' = 'exactly-once',
   'sink.transactional-id-prefix' = 'dtstack');

 

insert into ka15 SELECT
  name,
  cnt
FROM
  (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
NameTable(name,cnt);

 

This is the error:

Caused by: java.lang.IllegalStateException
at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
at 
org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
at 
org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:144)
at 
org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:76)
at java.util.Optional.orElseGet(Optional.java:267)
at 
org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:76)
... 14 more
 
 
I found the reason why the Kafka commit fails: while the downstream operator 
CommitterOperator is committing the transaction, the upstream operator 
SinkOperator has already closed, which aborts the transaction that the 
CommitterOperator is still committing. This only occurs when execution.runtime-mode is batch.

  was:
flinksql task submitted by sql client will failed:
 
Caused by: java.lang.IllegalStateException
at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
at 
org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
at 
org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:144)
at 
org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:76)
at java.util.Optional.orElseGet(Optional.java:267)
at 
org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:76)
... 14 more
 
 
i found the reason why  kafka commit failed, when downstream operator 
CommitterOperator was commiting transaction, the upstream  operator 
SinkOperator has closed , it will abort the transaction which  is committing by 
CommitterOperator, this is occurs when execution.runtime-mode is batch


> when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 
> 'exactly-once',kafka conncetor will commit fail
> ---
>
> Key: FLINK-25126
> URL: https://issues.apache.org/jira/browse/FLINK-25126
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
> Environment: SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>Reporter: eric yu
>Priority: Major
>
> flinksql task submitted by sql client will failed,
> this is the sql :
> SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>  
> this is the error:
> Caused by: java.lang.IllegalStateException
> at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
> at 
> org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(

[GitHub] [flink] dawidwys commented on a change in pull request #17946: [FLINK-24919][runtime] Getting vertex only under synchronization in C…

2021-12-01 Thread GitBox


dawidwys commented on a change in pull request #17946:
URL: https://github.com/apache/flink/pull/17946#discussion_r759946453



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ExecutionAttemptMappingProvider.java
##
@@ -56,14 +56,16 @@ protected boolean removeEldestEntry(
 }
 
 public Optional<ExecutionVertex> getVertex(ExecutionAttemptID id) {
-if (!cachedTasksById.containsKey(id)) {
-cachedTasksById.putAll(getCurrentAttemptMappings());
+synchronized (cachedTasksById) {

Review comment:
   I am wondering whether the entire caching here is not overly complicated. As 
far as I understand it, we're rebuilding the entire `cachedTasksById` on each 
miss. Wouldn't it make more sense to do so explicitly? What do you think about 
code like the following?
   
   ```
   public Optional<ExecutionVertex> getVertex(ExecutionAttemptID id) {
   ExecutionVertex vertex = cachedTasksById.get(id);
   if (vertex != null) {
   return Optional.of(vertex);
   } else {
   return updateAndGet(id);
   }
   }
   
   private synchronized Optional<ExecutionVertex> updateAndGet(ExecutionAttemptID id) {
   ExecutionVertex vertex;
   // might've been updated by another thread
   vertex = cachedTasksById.get(id);
   if (vertex != null) {
   return Optional.of(vertex);
   }
   cachedTasksById = getCurrentAttemptMappings();
   return Optional.ofNullable(cachedTasksById.get(id));
   }
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17463: [Hotfix] Fix typos.

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17463:
URL: https://github.com/apache/flink/pull/17463#issuecomment-942081615


   
   ## CI report:
   
   * b294eeddfac3dac03224f6948f4f2f14351208f0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26950)
 
   * cdd1b82c0785dbf5f30135158aaae7ada87a8528 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr closed pull request #17906: [FLINK-25060][table-common] Replace DataType.projectFields with Projection type

2021-12-01 Thread GitBox


twalthr closed pull request #17906:
URL: https://github.com/apache/flink/pull/17906


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-25060) Replace DataType.projectFields with Projection type

2021-12-01 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-25060.

Fix Version/s: 1.15.0
   Resolution: Fixed

Fixed in master: 30644a025acd2239c551adb68c0483d80358e07a

> Replace DataType.projectFields with Projection type
> ---
>
> Key: FLINK-25060
> URL: https://issues.apache.org/jira/browse/FLINK-25060
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Francesco Guardiani
>Assignee: Francesco Guardiani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> FLINK-24399 introduced new methods to perform data type projections in 
> DataType. Note: no release included those changes.
> FLINK-24776 introduced a new, more powerful type to perform operations on 
> projections: it can not only project types, but also compute their difference and complement.
> To avoid providing different entrypoints for the same functionality, we should 
> clean up the new methods introduced by FLINK-24399 and replace them with the 
> new Projection type. We should also deprecate the functions in DataTypeUtils.
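
For illustration, a brief sketch of the Projection-based replacement. Method names follow the FLINK-24776 work, so treat them as indicative rather than final:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.connector.Projection;
import org.apache.flink.table.types.DataType;

DataType rowType =
        DataTypes.ROW(
                DataTypes.FIELD("id", DataTypes.BIGINT()),
                DataTypes.FIELD("name", DataTypes.STRING()),
                DataTypes.FIELD("ts", DataTypes.TIMESTAMP(3)));

// Instead of DataType.projectFields(...), build a Projection and apply it.
DataType projected = Projection.of(new int[] {0, 2}).project(rowType); // ROW<id, ts>

// The same type also supports set-like operations such as difference and
// complement, which the removed DataType methods never offered.
```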



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25126) when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 'exactly-once',kafka conncetor will commit fail

2021-12-01 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451603#comment-17451603
 ] 

Jingsong Lee commented on FLINK-25126:
--

+1

> when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 
> 'exactly-once',kafka conncetor will commit fail
> ---
>
> Key: FLINK-25126
> URL: https://issues.apache.org/jira/browse/FLINK-25126
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
> Environment: SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>Reporter: eric yu
>Priority: Major
>
> flinksql task submitted by sql client will failed,
> this is the sql :
> SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>  
> this is the error:
> Caused by: java.lang.IllegalStateException
> at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
> at 
> org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:144)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:76)
> at java.util.Optional.orElseGet(Optional.java:267)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:76)
> ... 14 more
>  
>  
> i found the reason why  kafka commit failed, when downstream operator 
> CommitterOperator was commiting transaction, the upstream  operator 
> SinkOperator has closed , it will abort the transaction which  is committing 
> by CommitterOperator, this is occurs when execution.runtime-mode is batch



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17463: [Hotfix] Fix typos.

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17463:
URL: https://github.com/apache/flink/pull/17463#issuecomment-942081615


   
   ## CI report:
   
   * b294eeddfac3dac03224f6948f4f2f14351208f0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26950)
 
   * cdd1b82c0785dbf5f30135158aaae7ada87a8528 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27334)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr commented on a change in pull request #17931: [FLINK-25075] Refactor PlannerExpressionParser, removing the reflection to instantiate it

2021-12-01 Thread GitBox


twalthr commented on a change in pull request #17931:
URL: https://github.com/apache/flink/pull/17931#discussion_r759950115



##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/delegation/ExpressionParserFactory.java
##
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.delegation;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.factories.Factory;
+import org.apache.flink.table.factories.FactoryUtil;
+
+/**
+ * Factory for {@link ExpressionParser}.
+ *
+ * @deprecated The Scala Expression DSL is deprecated

Review comment:
   It's actually the `Java String Expression DSL` but I will fix this while 
merging.
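
   For readers who haven't seen it, the "Java String Expression DSL" referred to here is the string-based variant of the Table API that this factory parses, as opposed to the typed Expression DSL. A tiny sketch, assuming the 1.14 API where the deprecated String overloads still exist and a `Table` named `table`:

   ```java
   import static org.apache.flink.table.api.Expressions.$;

   // Deprecated string-based expression DSL (what the ExpressionParser handles):
   table.select("amount * 2 as doubled");

   // Typed Expression DSL replacement:
   table.select($("amount").times(2).as("doubled"));
   ```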




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] (FLINK-25126) when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 'exactly-once',kafka conncetor will commit fail

2021-12-01 Thread eric yu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-25126 ]


eric yu deleted comment on FLINK-25126:
-

was (Author: JIRAUSER281032):
[~rmetzger] 

> when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 
> 'exactly-once',kafka conncetor will commit fail
> ---
>
> Key: FLINK-25126
> URL: https://issues.apache.org/jira/browse/FLINK-25126
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
> Environment: SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>Reporter: eric yu
>Priority: Major
>
> flinksql task submitted by sql client will failed,
> this is the sql :
> SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>  
> this is the error:
> Caused by: java.lang.IllegalStateException
> at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
> at 
> org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:144)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:76)
> at java.util.Optional.orElseGet(Optional.java:267)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:76)
> ... 14 more
>  
>  
> i found the reason why  kafka commit failed, when downstream operator 
> CommitterOperator was commiting transaction, the upstream  operator 
> SinkOperator has closed , it will abort the transaction which  is committing 
> by CommitterOperator, this is occurs when execution.runtime-mode is batch



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25126) when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 'exactly-once',kafka conncetor will commit fail

2021-12-01 Thread eric yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451605#comment-17451605
 ] 

eric yu commented on FLINK-25126:
-

[~rmetzger] 

> when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 
> 'exactly-once',kafka conncetor will commit fail
> ---
>
> Key: FLINK-25126
> URL: https://issues.apache.org/jira/browse/FLINK-25126
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
> Environment: SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>Reporter: eric yu
>Priority: Major
>
> flinksql task submitted by sql client will failed,
> this is the sql :
> SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>  
> this is the error:
> Caused by: java.lang.IllegalStateException
> at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
> at 
> org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:144)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:76)
> at java.util.Optional.orElseGet(Optional.java:267)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:76)
> ... 14 more
>  
>  
> i found the reason why  kafka commit failed, when downstream operator 
> CommitterOperator was commiting transaction, the upstream  operator 
> SinkOperator has closed , it will abort the transaction which  is committing 
> by CommitterOperator, this is occurs when execution.runtime-mode is batch



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25126) when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 'exactly-once',kafka conncetor will commit fail

2021-12-01 Thread eric yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451606#comment-17451606
 ] 

eric yu commented on FLINK-25126:
-

[~rmetzger] 

> when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 
> 'exactly-once',kafka conncetor will commit fail
> ---
>
> Key: FLINK-25126
> URL: https://issues.apache.org/jira/browse/FLINK-25126
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
> Environment: SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>Reporter: eric yu
>Priority: Major
>
> flinksql task submitted by sql client will failed,
> this is the sql :
> SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>  
> this is the error:
> Caused by: java.lang.IllegalStateException
> at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
> at 
> org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:144)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:76)
> at java.util.Optional.orElseGet(Optional.java:267)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:76)
> ... 14 more
>  
>  
> i found the reason why  kafka commit failed, when downstream operator 
> CommitterOperator was commiting transaction, the upstream  operator 
> SinkOperator has closed , it will abort the transaction which  is committing 
> by CommitterOperator, this is occurs when execution.runtime-mode is batch



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25126) when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 'exactly-once',kafka conncetor will commit fail

2021-12-01 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451608#comment-17451608
 ] 

Jingsong Lee commented on FLINK-25126:
--

CC:  [~fpaul] 

> when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 
> 'exactly-once',kafka conncetor will commit fail
> ---
>
> Key: FLINK-25126
> URL: https://issues.apache.org/jira/browse/FLINK-25126
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
> Environment: SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>Reporter: eric yu
>Priority: Major
>
> flinksql task submitted by sql client will failed,
> this is the sql :
> SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>  
> this is the error:
> Caused by: java.lang.IllegalStateException
> at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
> at 
> org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:144)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:76)
> at java.util.Optional.orElseGet(Optional.java:267)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:76)
> ... 14 more
>  
>  
> i found the reason why  kafka commit failed, when downstream operator 
> CommitterOperator was commiting transaction, the upstream  operator 
> SinkOperator has closed , it will abort the transaction which  is committing 
> by CommitterOperator, this is occurs when execution.runtime-mode is batch



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17932: [FLINK-25079][table-common] Add some initial assertj assertions for table data and types apis

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17932:
URL: https://github.com/apache/flink/pull/17932#issuecomment-980084610


   
   ## CI report:
   
   * b710cc7d87487f5a91cc9ff2aa71f64e7ba463ce Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27289)
 
   * 03951aea9d48a1955f459250208bf498b014342a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys commented on a change in pull request #17929: [FLINK-21467][docs] Clarify javadocs of Bounded(One/Multi)Input interfaces

2021-12-01 Thread GitBox


dawidwys commented on a change in pull request #17929:
URL: https://github.com/apache/flink/pull/17929#discussion_r759953306



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/BoundedMultiInput.java
##
@@ -19,13 +19,25 @@
 
 import org.apache.flink.annotation.PublicEvolving;
 
-/** Interface for the multi-input operators that can process EndOfInput event. 
*/
+/**
+ * Interface for multi-input operators that need to be notified about the 
logical/semantical end of
+ * input.
+ *
+ * WARNING: It is not safe to use this method to commit any 
transactions or other side
+ * effects! You can use this method to e.g. flush data buffered for the given 
input or implement an
+ * ordered reading from multiple inputs via {@link InputSelectable}.

Review comment:
   Makes sense.
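
   To make the distinction concrete, an illustrative sketch (not from this PR) of a two-input operator that uses `endInput` only to flush buffered records, i.e. the safe usage the Javadoc describes; class and field names are made up:

   ```java
   import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
   import org.apache.flink.streaming.api.operators.BoundedMultiInput;
   import org.apache.flink.streaming.api.operators.TwoInputStreamOperator;
   import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;

   import java.util.ArrayList;
   import java.util.List;

   // Buffers the second input and flushes it once that input logically ends.
   class BufferingSecondInputOperator extends AbstractStreamOperator<String>
           implements TwoInputStreamOperator<String, String, String>, BoundedMultiInput {

       private final List<String> buffer = new ArrayList<>();

       @Override
       public void processElement1(StreamRecord<String> element) {
           output.collect(element); // pass-through
       }

       @Override
       public void processElement2(StreamRecord<String> element) {
           buffer.add(element.getValue()); // flushed in endInput
       }

       @Override
       public void endInput(int inputId) {
           // Safe: flush buffered data. NOT safe: committing transactions here.
           if (inputId == 2) {
               for (String value : buffer) {
                   output.collect(new StreamRecord<>(value));
               }
               buffer.clear();
           }
       }
   }
   ```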




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] zhipeng93 commented on a change in pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-12-01 Thread GitBox


zhipeng93 commented on a change in pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#discussion_r759953240



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/classification/linear/LogisticRegression.java
##
@@ -0,0 +1,653 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.classification.linear;
+
+import org.apache.flink.api.common.functions.RichMapFunction;
+import org.apache.flink.api.common.state.ListState;
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.PrimitiveArrayTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.base.DoubleComparator;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.typeutils.TupleTypeInfo;
+import org.apache.flink.iteration.DataStreamList;
+import org.apache.flink.iteration.IterationBody;
+import org.apache.flink.iteration.IterationBodyResult;
+import org.apache.flink.iteration.IterationConfig;
+import org.apache.flink.iteration.IterationConfig.OperatorLifeCycle;
+import org.apache.flink.iteration.IterationListener;
+import org.apache.flink.iteration.Iterations;
+import org.apache.flink.iteration.ReplayableDataStreamList;
+import org.apache.flink.iteration.operator.OperatorStateUtils;
+import org.apache.flink.ml.api.Estimator;
+import org.apache.flink.ml.common.broadcast.BroadcastUtils;
+import org.apache.flink.ml.common.datastream.DataStreamUtils;
+import org.apache.flink.ml.common.iteration.TerminateOnMaxIterOrTol;
+import org.apache.flink.ml.linalg.BLAS;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.runtime.state.StateInitializationContext;
+import org.apache.flink.runtime.state.StateSnapshotContext;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
+import org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator;
+import org.apache.flink.streaming.api.operators.BoundedMultiInput;
+import org.apache.flink.streaming.api.operators.BoundedOneInput;
+import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.TwoInputStreamOperator;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.OutputTag;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.commons.collections.IteratorUtils;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+
+/**
+ * This class implements methods to train a logistic regression model. For 
details, see
+ * https://en.wikipedia.org/wiki/Logistic_regression.
+ */
+public class LogisticRegression
+implements Estimator<LogisticRegression, LogisticRegressionModel>,
+LogisticRegressionParams<LogisticRegression> {
+
+private Map<Param<?>, Object> paramMap = new HashMap<>();
+
+private static final OutputTag> MODEL_OUTPUT =
+new OutputTag>("MODEL_OUTPUT") {};
+
+public LogisticRegression() {
+ParamUtils.initializeMapWithDefaultValues(this.paramMap, this);
+}
+
+@Override
+public Map<Param<?>, Object> getParamMap() {
+return paramMap;
+}
+
+@Override
+public void save(String path) throws IOException {
+ReadWriteUtils.saveMetadata(this, path);
+}
+
+public static LogisticRegression load(StreamExecutionEnvironment env, 
String path)
+throws IOException {
+return ReadWriteUtils.load

[jira] [Updated] (FLINK-25126) when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 'exactly-once',kafka conncetor will commit fail

2021-12-01 Thread Fabian Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Paul updated FLINK-25126:

Priority: Blocker  (was: Major)

> when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 
> 'exactly-once',kafka conncetor will commit fail
> ---
>
> Key: FLINK-25126
> URL: https://issues.apache.org/jira/browse/FLINK-25126
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
> Environment: SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>Reporter: eric yu
>Priority: Blocker
>
> flinksql task submitted by sql client will failed,
> this is the sql :
> SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>  
> this is the error:
> Caused by: java.lang.IllegalStateException
> at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
> at 
> org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:144)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:76)
> at java.util.Optional.orElseGet(Optional.java:267)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:76)
> ... 14 more
>  
>  
> i found the reason why  kafka commit failed, when downstream operator 
> CommitterOperator was commiting transaction, the upstream  operator 
> SinkOperator has closed , it will abort the transaction which  is committing 
> by CommitterOperator, this is occurs when execution.runtime-mode is batch



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25126) when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 'exactly-once',kafka conncetor will commit fail

2021-12-01 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-25126:
---
Fix Version/s: 1.14.1

> when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 
> 'exactly-once',kafka conncetor will commit fail
> ---
>
> Key: FLINK-25126
> URL: https://issues.apache.org/jira/browse/FLINK-25126
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
> Environment: SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>Reporter: eric yu
>Priority: Blocker
> Fix For: 1.14.1
>
>
> flinksql task submitted by sql client will failed,
> this is the sql :
> SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>  
> this is the error:
> Caused by: java.lang.IllegalStateException
> at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
> at 
> org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:144)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:76)
> at java.util.Optional.orElseGet(Optional.java:267)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:76)
> ... 14 more
>  
>  
> I found the reason why the Kafka commit fails: while the downstream operator 
> CommitterOperator is committing the transaction, the upstream operator 
> SinkOperator has already closed and aborts the transaction that is being 
> committed by the CommitterOperator. This occurs when execution.runtime-mode is batch.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17932: [FLINK-25079][table-common] Add some initial assertj assertions for table data and types apis

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17932:
URL: https://github.com/apache/flink/pull/17932#issuecomment-980084610


   
   ## CI report:
   
   * b710cc7d87487f5a91cc9ff2aa71f64e7ba463ce Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27289)
 
   * 03951aea9d48a1955f459250208bf498b014342a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27336)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25126) when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 'exactly-once', the Kafka connector commit will fail

2021-12-01 Thread Fabian Paul (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451611#comment-17451611
 ] 

Fabian Paul commented on FLINK-25126:
-

I'll make sure that we fix it before releasing 1.14.1.

> when SET 'execution.runtime-mode' = 'batch' and 'sink.delivery-guarantee' = 
> 'exactly-once', the Kafka connector commit will fail
> ---
>
> Key: FLINK-25126
> URL: https://issues.apache.org/jira/browse/FLINK-25126
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
> Environment: SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>Reporter: eric yu
>Priority: Blocker
> Fix For: 1.14.1
>
>
> The Flink SQL task submitted by the SQL client will fail.
> this is the sql :
> SET 'execution.runtime-mode' = 'batch';
>  CREATE TABLE ka15 (
>      name String,
>      cnt bigint
>  ) WITH (
>    'connector' = 'kafka',
>    'topic' = 'shifang',
>   'properties.bootstrap.servers' = 'flinkx1:9092',
>   'properties.transaction.timeout.ms' = '80',
>   'properties.max.block.ms' = '30',
>    'value.format' = 'json',
>    'sink.parallelism' = '2',
>    'sink.delivery-guarantee' = 'exactly-once',
>    'sink.transactional-id-prefix' = 'dtstack');
>  
> insert into ka15 SELECT
>   name,
>   cnt
> FROM
>   (VALUES ('Bob',100), ('Alice',100), ('Greg',100), ('Bob',100)) AS 
> NameTable(name,cnt);
>  
> this is the error:
> Caused by: java.lang.IllegalStateException
> at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
> at 
> org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.setTransactionId(FlinkKafkaInternalProducer.java:164)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:144)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:76)
> at java.util.Optional.orElseGet(Optional.java:267)
> at 
> org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:76)
> ... 14 more
>  
>  
> I found the reason why the Kafka commit fails: while the downstream operator 
> CommitterOperator is committing the transaction, the upstream operator 
> SinkOperator has already closed and aborts the transaction that is being 
> committed by the CommitterOperator. This occurs when execution.runtime-mode is batch.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] fapaul merged pull request #17957: [FLINK-15493][test] Add retry rule for all tests based on KafkaTestBase

2021-12-01 Thread GitBox


fapaul merged pull request #17957:
URL: https://github.com/apache/flink/pull/17957


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr closed pull request #17931: [FLINK-25075] Refactor PlannerExpressionParser, removing the reflection to instantiate it

2021-12-01 Thread GitBox


twalthr closed pull request #17931:
URL: https://github.com/apache/flink/pull/17931


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16609: [FLINK-23324][connector-jdbc] fix Postgres case-insensitive.

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #16609:
URL: https://github.com/apache/flink/pull/16609#issuecomment-887558020


   
   ## CI report:
   
   * 906f6d18caf7662f1563ecd952bd2eaefd06 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27329)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17932: [FLINK-25079][table-common] Add some initial assertj assertions for table data and types apis

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17932:
URL: https://github.com/apache/flink/pull/17932#issuecomment-980084610


   
   ## CI report:
   
   * b710cc7d87487f5a91cc9ff2aa71f64e7ba463ce Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27289)
 
   * 03951aea9d48a1955f459250208bf498b014342a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27336)
 
   * aa44565b71a1180508e63152555e342ff77e760d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-25075) Remove reflection to instantiate PlannerExpressionParser

2021-12-01 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-25075.

Fix Version/s: 1.15.0
   Resolution: Fixed

Fixed in master: 8c40849c5249d96532305d6b88302ee863371541

> Remove reflection to instantiate PlannerExpressionParser
> 
>
> Key: FLINK-25075
> URL: https://issues.apache.org/jira/browse/FLINK-25075
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Francesco Guardiani
>Assignee: Francesco Guardiani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> This reflection leaks the planner module classpath and can cause issues when 
> isolating the classpath



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17929: [FLINK-21467][docs] Clarify javadocs of Bounded(One/Multi)Input interfaces

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17929:
URL: https://github.com/apache/flink/pull/17929#issuecomment-979955307


   
   ## CI report:
   
   * f0ea64c5b47a20dbcd8e57865a8041d8ef2549ad Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27131)
 
   * 2b22ab59c52bde55b8f8e6b9bf51baf083781d6f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17929: [FLINK-21467][docs] Clarify javadocs of Bounded(One/Multi)Input interfaces

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17929:
URL: https://github.com/apache/flink/pull/17929#issuecomment-979955307


   
   ## CI report:
   
   * f0ea64c5b47a20dbcd8e57865a8041d8ef2549ad Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27131)
 
   * 2b22ab59c52bde55b8f8e6b9bf51baf083781d6f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27337)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] zhipeng93 commented on pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-12-01 Thread GitBox


zhipeng93 commented on pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#issuecomment-983413980


   Hi @lindong28, thanks for the comments. I have addressed them in the latest 
commit.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr commented on a change in pull request #17911: [FLINK-24902][table-planner] Port integer to boolean and boolean to numeric to CastRule

2021-12-01 Thread GitBox


twalthr commented on a change in pull request #17911:
URL: https://github.com/apache/flink/pull/17911#discussion_r759964859



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/IntegerNumericToBooleanCastRule.java
##
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.functions.casting;
+
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeFamily;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+
+/** {@link LogicalTypeFamily#INTEGER_NUMERIC} to {@link 
LogicalTypeRoot#BOOLEAN} conversions. */
+public class IntegerNumericToBooleanCastRule

Review comment:
   Can be package private.

##
File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
##
@@ -1096,29 +1096,6 @@ object ScalarOperatorGens {
   operandTerm => s"$operandTerm.toBytes($serTerm)"
 }
 
-  // Note: SQL2003 $6.12 - casting is not allowed between boolean and 
numeric types.

Review comment:
   I just wanted to draw attention to it. But I think it is not valid anymore, so 
let's not preserve it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-24506) checkpoint directory is not configurable through the Flink configuration passed into the StreamExecutionEnvironment

2021-12-01 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias reassigned FLINK-24506:


Assignee: Matthias

> checkpoint directory is not configurable through the Flink configuration 
> passed into the StreamExecutionEnvironment
> ---
>
> Key: FLINK-24506
> URL: https://issues.apache.org/jira/browse/FLINK-24506
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration, Runtime / State Backends
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Matthias
>Assignee: Matthias
>Priority: Major
>
> FLINK-19463 introduced the separation of {{StateBackend}} and 
> {{{}CheckpointStorage{}}}. Before that, both were included in the same 
> interface implementation 
> [AbstractFileStateBackend|https://github.com/apache/flink/blob/0a76daba0a428a322f0273d7dc6a70966f62bf26/flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/AbstractFileStateBackend.java].
>  {{FsStateBackend}} was used as a default implementation pre-1.13.
> pre-{{{}1.13{}}} initialized the checkpoint directory when instantiating the 
> state backend (see 
> [FsStateBackendFactory|https://github.com/apache/flink/blob/release-1.12/flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/FsStateBackendFactory.java#L46]).
>  Starting from {{1.13}} loading the {{CheckpointStorage}} is done by the 
> {{CheckpointStorageLoader.load}} method that is called in various places:
>  * Savepoint Disposal (through {{{}Checkpoints.loadCheckpointStorage{}}}) 
> where it only relies on the configuration passed in by the cluster 
> configuration (no application checkpoint storage is passed)
>  * {{SchedulerBase}} initialization (through DefaultExecutionGraphBuilder) 
> where it’s based on the cluster’s configuration but also the application 
> configuration (i.e. the {{{}JobGraph{}}}’s setting) that would be considered 
> if {{CheckpointConfig#configure}} would have the checkpoint storage included
>  * {{StreamTask}} on the {{{}TaskManager{}}}’s side where it’s based on the 
> configuration passed in by the {{JobVertex}} for the application’s 
> {{CheckpointStorage}} and the {{{}TaskManager{}}}’s configuration (coming 
> from the session cluster) for the fallback {{CheckpointStorage}}
> The issue is that we don't set the checkpoint directory in the 
> {{{}CheckpointConfig{}}}. Hence, it's not going to get picked up as a 
> job-related property. Flink always uses the fallback provided by the session 
> cluster configuration.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25115) Why Flink Sink operator metric numRecordsOut and numRecordsOutPerSecond always equal 0

2021-12-01 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451621#comment-17451621
 ] 

Chesnay Schepler commented on FLINK-25115:
--

Note that since 1.14 various sources/sinks expose numRecordsIn/-Out metrics.

> Why Flink Sink operator metric numRecordsOut and numRecordsOutPerSecond 
> always equal 0
> --
>
> Key: FLINK-25115
> URL: https://issues.apache.org/jira/browse/FLINK-25115
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.2
>Reporter: hjw
>Priority: Major
> Attachments: image-2021-11-30-23-56-26-222.png
>
>
> I submitted a Flink SQL job. I found that the numRecordsOut and 
> numRecordsOutPerSecond indicators are always 0.
>  
> !image-2021-11-30-23-56-26-222.png!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25127) Reuse a single Collection in GlobalWindows#assignWindows

2021-12-01 Thread bx123 (Jira)
bx123 created FLINK-25127:
-

 Summary: Reuse a single Collection in GlobalWindows#assignWindows
 Key: FLINK-25127
 URL: https://issues.apache.org/jira/browse/FLINK-25127
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream
Reporter: bx123


When we use GlobalWindow, the window assigner creates a new collection for 
each incoming stream record. This is not necessary, as the collection is immutable 
and GlobalWindow is a stateless singleton. We can add a static collection 
field in GlobalWindows and return it, to take pressure off the GC.

 

private static final Collection<GlobalWindow> GLOBAL_WINDOW_COLLECTION =
        Collections.singletonList(GlobalWindow.get());

@Override
public Collection<GlobalWindow> assignWindows(
        Object element, long timestamp, WindowAssignerContext context) {
    // Return the shared immutable singleton list instead of allocating a new one per record.
    return GLOBAL_WINDOW_COLLECTION;
}
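
For context, a hedged usage sketch (not part of this ticket, plain DataStream API) 
of where the assigner sits on the per-record hot path, which is why avoiding a 
fresh list per element takes pressure off the GC:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows;
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;

public class GlobalWindowUsage {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Long> input = env.fromSequence(0, 1_000_000);

        input.windowAll(GlobalWindows.create())  // assignWindows is invoked once per record
                .trigger(CountTrigger.of(1000))  // emit a result every 1000 elements
                .reduce(Long::sum)
                .print();

        env.execute("global-window-usage");
    }
}
```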

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] XComp opened a new pull request #17972: [FLINK-24506][config] Adds checkpoint directory to CheckpointConfig

2021-12-01 Thread GitBox


XComp opened a new pull request #17972:
URL: https://github.com/apache/flink/pull/17972


   This enables the user to pass in the checkpoint directory through the Flink
   configuration again. This fixes an issue that was introduced by 
[FLINK-19463](https://issues.apache.org/jira/browse/FLINK-19463).
   
   ## What is the purpose of the change
   
   This fixes an issue where the checkpoint directory couldn't be set anymore 
through the Flink configuration. See a more detailed description in the ticket 
[FLINK-24506](https://issues.apache.org/jira/browse/FLINK-24506)
   
   ## Brief change log
   
   * Adds the `state.checkpoints.dir` to `CheckpointConfig.configure`
   
   ## Verifying this change
   
   * A test was added to the corresponding 
`CheckpointConfigFromConfigurationTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: yes
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
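
   As a minimal sketch of the user-facing scenario this change targets (assuming the 
standard `StreamExecutionEnvironment.getExecutionEnvironment(Configuration)` entry 
point; the directory and job name below are placeholders):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointDirFromConfig {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        // The option this change wires into CheckpointConfig.configure.
        config.setString("state.checkpoints.dir", "file:///tmp/flink-checkpoints");

        // The environment is created from the user-provided configuration,
        // so the checkpoint directory is picked up per job again.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(config);
        env.enableCheckpointing(10_000);

        env.fromElements(1, 2, 3).print();
        env.execute("checkpoint-dir-from-config");
    }
}
```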
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24506) checkpoint directory is not configurable through the Flink configuration passed into the StreamExecutionEnvironment

2021-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24506:
---
Labels: pull-request-available  (was: )

> checkpoint directory is not configurable through the Flink configuration 
> passed into the StreamExecutionEnvironment
> ---
>
> Key: FLINK-24506
> URL: https://issues.apache.org/jira/browse/FLINK-24506
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration, Runtime / State Backends
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Matthias
>Assignee: Matthias
>Priority: Major
>  Labels: pull-request-available
>
> FLINK-19463 introduced the separation of {{StateBackend}} and 
> {{{}CheckpointStorage{}}}. Before that, both were included in the same 
> interface implementation 
> [AbstractFileStateBackend|https://github.com/apache/flink/blob/0a76daba0a428a322f0273d7dc6a70966f62bf26/flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/AbstractFileStateBackend.java].
>  {{FsStateBackend}} was used as a default implementation pre-1.13.
> pre-{{{}1.13{}}} initialized the checkpoint directory when instantiating the 
> state backend (see 
> [FsStateBackendFactory|https://github.com/apache/flink/blob/release-1.12/flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/FsStateBackendFactory.java#L46]).
>  Starting from {{1.13}} loading the {{CheckpointStorage}} is done by the 
> {{CheckpointStorageLoader.load}} method that is called in various places:
>  * Savepoint Disposal (through {{{}Checkpoints.loadCheckpointStorage{}}}) 
> where it only relies on the configuration passed in by the cluster 
> configuration (no application checkpoint storage is passed)
>  * {{SchedulerBase}} initialization (through DefaultExecutionGraphBuilder) 
> where it’s based on the cluster’s configuration but also the application 
> configuration (i.e. the {{{}JobGraph{}}}’s setting) that would be considered 
> if {{CheckpointConfig#configure}} would have the checkpoint storage included
>  * {{StreamTask}} on the {{{}TaskManager{}}}’s side where it’s based on the 
> configuration passed in by the {{JobVertex}} for the application’s 
> {{CheckpointStorage}} and the {{{}TaskManager{}}}’s configuration (coming 
> from the session cluster) for the fallback {{CheckpointStorage}}
> The issue is that we don't set the checkpoint directory in the 
> {{{}CheckpointConfig{}}}. Hence, it's not going to get picked up as a 
> job-related property. Flink always uses the fallback provided by the session 
> cluster configuration.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24506) checkpoint directory is not configurable through the Flink configuration passed into the StreamExecutionEnvironment

2021-12-01 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451622#comment-17451622
 ] 

Matthias commented on FLINK-24506:
--

Both {{StreamPlanEnvironment}} and {{StreamContextEnvironment}} are initialized 
using the Flink configuration (which includes the checkpoint directory). Both 
classes derive from {{{}StreamExecutionEnvironment{}}}. The 
{{StreamExecutionEnvironment}} initializes the {{CheckpointConfig}} (see 
[StreamExecutionEnvironment:975|https://github.com/apache/flink/blob/b4c385e41832f16e39d5cbe4fb69ead9bbe077b2/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java#L975]).
 But it doesn’t set the checkpoint directory in 
[CheckpointConfig#configure|https://github.com/apache/flink/blob/cd01d4c02793d1b29618093f730b3bc521152b62/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/CheckpointConfig.java#L753].
 This was previously set when initializing the {{StateBackend}} in 
[StreamExecutionEnvironment#configure|https://github.com/apache/flink/blob/b4c385e41832f16e39d5cbe4fb69ead9bbe077b2/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java#L919]
 which calls 
[StateBackendLoader#loadStateBackendFromConfig|https://github.com/apache/flink/blob/641c31e92fd8ff3702d2ac3510a63b0653802a2e/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java#L146].

{{FsStateBackend}} was initialized using the checkpoint directory. 
{{HashMapStateBackend}} does not include this pointer anymore but relies on the 
{{CheckpointStorage}} instead that is loaded through 
[CheckpointStorageLoader.load|https://github.com/apache/flink/blob/658fac3736b73adf54b629242ede91313947e7e1/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLoader.java#L158]
 which is called when creating the ExecutionGraph in 
[DefaultExecutionGraphBuilder:269|https://github.com/apache/flink/blob/eba8f574c550123004ed4f557cef28ff557cd88e/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/DefaultExecutionGraphBuilder.java#L269].
 The {{CheckpointStorage}} is either loaded from the JobManager configuration 
(i.e. the Session cluster’s configuration) or from the application (i.e. the 
{{{}JobGraph{}}}). But the {{JobGraph}} does not have this setting set due to 
it not being written 
[CheckpointConfig#configure|https://github.com/apache/flink/blob/cd01d4c02793d1b29618093f730b3bc521152b62/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/CheckpointConfig.java#L753]
 as written earlier already.

The {{CheckpointStorage}} is loaded in three locations:
 * Savepoint Disposal (through 
[Checkpoints.loadCheckpointStorage|https://github.com/apache/flink/blob/4597d5557c640e0ef5a526cbb6d46686be5dd813/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/Checkpoints.java#L351])
 where it only relies on the configuration passed in by the cluster 
configuration (no application checkpoint storage is passed)

 * Scheduler initialization (through 
[DefaultExecutionGraphBuilder|https://github.com/apache/flink/blob/f7bedb0603c33cb4e25c62c9899edb709b264371/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/DefaultExecutionGraphBuilder.java#L268])
 where it’s based on the cluster’s configuration but also the application 
configuration (i.e. the JobGraph’s setting) that would be considered if 
[CheckpointConfig#configure|https://github.com/apache/flink/blob/cd01d4c02793d1b29618093f730b3bc521152b62/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/CheckpointConfig.java#L753]
 would have the checkpoint storage included

 * 
[StreamTask|https://github.com/apache/flink/blob/79a801a7bf669813d88784fd642d724d6dab69f4/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java#L1505]
 on the TaskManager’s side where it’s based on the configuration passed in by 
the {{JobVertex}} for the application’s {{CheckpointStorage}} and the 
TaskManager’s configuration (coming from the session cluster) for the fallback 
{{CheckpointStorage}}

For the latter two, we could solve the issue of not having the checkpoint 
directory being loaded from the job spec by considering the option in 
[CheckpointConfig#configure|https://github.com/apache/flink/blob/cd01d4c02793d1b29618093f730b3bc521152b62/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/CheckpointConfig.java#L753]

The Savepoint Disposal is safe even with this change because it relies on an 
external pointer for the savepoint and does not use the checkpoint directory at 
all.

> checkpoint directory is not configurable through the Flink configuration 
> passed into the StreamExecutionEnvironment
> 

[GitHub] [flink] flinkbot commented on pull request #17972: [FLINK-24506][config] Adds checkpoint directory to CheckpointConfig

2021-12-01 Thread GitBox


flinkbot commented on pull request #17972:
URL: https://github.com/apache/flink/pull/17972#issuecomment-983423604


   
   ## CI report:
   
   * d8939c8a7dfb4e8bc9a338ab54dab8ab6b72 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17972: [FLINK-24506][config] Adds checkpoint directory to CheckpointConfig

2021-12-01 Thread GitBox


flinkbot commented on pull request #17972:
URL: https://github.com/apache/flink/pull/17972#issuecomment-983423826


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit d8939c8a7dfb4e8bc9a338ab54dab8ab6b72 (Wed Dec 01 
08:55:14 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] akalash commented on a change in pull request #17946: [FLINK-24919][runtime] Getting vertex only under synchronization in C…

2021-12-01 Thread GitBox


akalash commented on a change in pull request #17946:
URL: https://github.com/apache/flink/pull/17946#discussion_r759973866



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ExecutionAttemptMappingProvider.java
##
@@ -56,14 +56,16 @@ protected boolean removeEldestEntry(
 }
 
 public Optional getVertex(ExecutionAttemptID id) {
-if (!cachedTasksById.containsKey(id)) {
-cachedTasksById.putAll(getCurrentAttemptMappings());
+synchronized (cachedTasksById) {

Review comment:
   Unfortunately, it is not a fully correct solution since 
`cachedTasksById.containsKey(id)` and `cachedTasksById.get(id) == null` can 
both be true at the same time, so you cannot replace `containsKey` with a null 
check. Secondly, we originally have a `LinkedHashMap` and remove the oldest 
entry if the size is exceeded, while for a concurrent implementation this will 
be more tricky.
   In fact, we can resolve all of these problems, but as I said before it will 
be more complicated (ConcurrentHashMap + ConcurrentLinkedQueue + NullObject). 
   Right now I don't really understand how critical this code is and how often 
it is executed. But maybe I will try my idea and we will see how complicated it 
looks.
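
   To illustrate the two points with plain JDK code (a hedged standalone sketch, 
not the Flink class itself):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NullValueVsMissingKey {
    public static void main(String[] args) {
        // Insertion-ordered map that evicts the eldest entry beyond a small capacity,
        // mirroring the removeEldestEntry-based cache in the provider.
        Map<String, String> cache =
                new LinkedHashMap<String, String>(16, 0.75f, false) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                        return size() > 2;
                    }
                };

        cache.put("a", null); // key present, value null
        System.out.println(cache.containsKey("a")); // true
        System.out.println(cache.get("a") == null); // also true, so a null check != containsKey

        cache.put("b", "1");
        cache.put("c", "2"); // "a" is evicted once the size limit is exceeded
        System.out.println(cache.containsKey("a")); // false
    }
}
```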




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol merged pull request #17963: [FLINK-25112][tests] Remove cache-ttl for Java e2e tests

2021-12-01 Thread GitBox


zentol merged pull request #17963:
URL: https://github.com/apache/flink/pull/17963


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul opened a new pull request #17973: [FLINK-15493][test] Add retry rule for all tests based on KafkaTestBase

2021-12-01 Thread GitBox


fapaul opened a new pull request #17973:
URL: https://github.com/apache/flink/pull/17973


   Unchanged backport of #17957 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-22621) HBaseConnectorITCase.testTableSourceSinkWithDDL unstable on azure

2021-12-01 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-22621:
--
Fix Version/s: (was: 1.15.0)
   (was: 1.14.1)
   (was: 1.13.4)

> HBaseConnectorITCase.testTableSourceSinkWithDDL unstable on azure
> -
>
> Key: FLINK-22621
> URL: https://issues.apache.org/jira/browse/FLINK-22621
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Roman Khachatryan
>Assignee: Jing Ge
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17763&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20&l=12317]
>  
> {code:java}
> 2021-05-10T00:19:41.1703846Z May 10 00:19:41 
> testTableSourceSinkWithDDL[planner = BLINK_PLANNER, legacy = 
> false](org.apache.flink.connector.hbase2.HBaseConnectorITCase)  Time elapsed: 
> 2.907 sec  <<< FAILURE!
> 2021-05-10T00:19:41.1711710Z May 10 00:19:41 java.lang.AssertionError: 
> expected:<[+I[1, 10, Hello-1, 100, 1.01, false, Welt-1, 2019-08-18T19:00, 
> 2019-08-18, 19:00, 12345678.0001], +I[2, 20, Hello-2, 200, 2.02, true, 
> Welt-2, 2019-08-18T19:01, 2019-08-18, 19:01, 12345678.0002], +I[3, 30, 
> Hello-3, 300, 3.03, false, Welt-3, 2019-08-18T19:02, 2019-08-18, 19:02, 
> 12345678.0003], +I[4, 40, null, 400, 4.04, true, Welt-4, 2019-08-18T19:03, 
> 2019-08-18, 19:03, 12345678.0004], +I[5, 50, Hello-5, 500, 5.05, false, 
> Welt-5, 2019-08-19T19:10, 2019-08-19, 19:10, 12345678.0005], +I[6, 60, 
> Hello-6, 600, 6.06, true, Welt-6, 2019-08-19T19:20, 2019-08-19, 19:20, 
> 12345678.0006], +I[7, 70, Hello-7, 700, 7.07, false, Welt-7, 
> 2019-08-19T19:30, 201 9-08-19, 19:30, 12345678.0007], +I[8, 80, null, 800, 
> 8.08, true, Welt-8, 2019-08-19T19:40, 2019-08-19, 19:40, 12345678.0008]]> but 
> was:<[+I[1, 10,  Hello-1, 100, 1.01, false, Welt-1, 2019-08-18T19:00, 
> 2019-08-18, 19:00, 12345678.0001], +I[2, 20, Hello-2, 200, 2.02, true, 
> Welt-2, 2019-08-18T19 :01, 2019-08-18, 19:01, 12345678.0002], +I[3, 30, 
> Hello-3, 300, 3.03, false, Welt-3, 2019-08-18T19:02, 2019-08-18, 19:02, 
> 12345678.0003]]>
> 2021-05-10T00:19:41.1716769Z May 10 00:19:41at 
> org.junit.Assert.fail(Assert.java:88)
> 2021-05-10T00:19:41.1717997Z May 10 00:19:41at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2021-05-10T00:19:41.1718744Z May 10 00:19:41at 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2021-05-10T00:19:41.1719472Z May 10 00:19:41at 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2021-05-10T00:19:41.1720270Z May 10 00:19:41at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnecto
>  rITCase.java:506)
>  {code}
> Probably the same or similar to FLINK-19615



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17972: [FLINK-24506][config] Adds checkpoint directory to CheckpointConfig

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17972:
URL: https://github.com/apache/flink/pull/17972#issuecomment-983423604


   
   ## CI report:
   
   * d8939c8a7dfb4e8bc9a338ab54dab8ab6b72 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27339)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-25112) Remove TTL from e2e cache

2021-12-01 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-25112.

Resolution: Fixed

master: 5b99a02a38e3f01f8ffe3504c6b0918ff3f65d6c
1.14: c8c60e1e81328a237d02fc4fa7fc8dfa5ff42710 
1.13: 00900240591eb01c1bc3b8dcf38b2421ec74caca 

> Remove TTL from e2e cache
> -
>
> Key: FLINK-25112
> URL: https://issues.apache.org/jira/browse/FLINK-25112
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Test Infrastructure
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.14.1, 1.13.4
>
>
> The e2e cache for java tests is currently configured to ignore entries if 
> they are 3 months old.
> However, because existing caches are immutable in Azure, this means that if 
> something was put into the cache, and remained valid for 3 months (e.g., 
> because no further java e2e test was added), then this will result in every 
> run ignore the cached contents.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] akalash commented on a change in pull request #17946: [FLINK-24919][runtime] Getting vertex only under synchronization in C…

2021-12-01 Thread GitBox


akalash commented on a change in pull request #17946:
URL: https://github.com/apache/flink/pull/17946#discussion_r759977766



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ExecutionAttemptMappingProvider.java
##
@@ -56,14 +56,16 @@ protected boolean removeEldestEntry(
 }
 
 public Optional getVertex(ExecutionAttemptID id) {
-if (!cachedTasksById.containsKey(id)) {
-cachedTasksById.putAll(getCurrentAttemptMappings());
+synchronized (cachedTasksById) {

Review comment:
   Oh, or do you mean that we can drop all mappings when we have even one 
miss?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25081) When chaining an operator of a side output stream, the num records sent displayed on the dashboard is incorrect

2021-12-01 Thread Guo Weijie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451634#comment-17451634
 ] 

Guo Weijie commented on FLINK-25081:


It seems that the numRecordsOut of the StreamTask only uses the value corresponding 
to the last operator of the operator chain. Do we also need to count the non-end 
operators of the operator chain? numBytesOut is counted when the 
ResultPartition writes out data, so that value is correct, as shown in your picture. 
Maybe we can also use this approach for numRecordsOut. What's your opinion 
about this?

> When chaining an operator of a side output stream, the num records sent 
> displayed on the dashboard is incorrect
> ---
>
> Key: FLINK-25081
> URL: https://issues.apache.org/jira/browse/FLINK-25081
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.14.0
>Reporter: Lijie Wang
>Priority: Major
> Attachments: image-2021-11-26-20-32-08-443.png
>
>
> As show in the following figure, "Map" is an operator of a side output 
> stream, the num records sent of first vertex is 0.
> !image-2021-11-26-20-32-08-443.png|width=750,height=253!
>  
> The job code is as follows:
> {code:java}
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> SingleOutputStreamOperator dataStream =
> env.addSource(new 
> DataGeneratorSource<>(RandomGenerator.longGenerator(1, 1000)))
> .returns(Long.class)
> .setParallelism(10)
> .slotSharingGroup("group1");
> DataStream sideOutput = dataStream.getSideOutput(new 
> OutputTag("10") {});
> sideOutput.map(num -> num).setParallelism(10).slotSharingGroup("group1");
> dataStream.addSink(new 
> DiscardingSink<>()).setParallelism(10).slotSharingGroup("group2");
> env.execute("WordCount"); {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #17973: [FLINK-15493][test] Add retry rule for all tests based on KafkaTestBase

2021-12-01 Thread GitBox


flinkbot commented on pull request #17973:
URL: https://github.com/apache/flink/pull/17973#issuecomment-983427891






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] akalash commented on a change in pull request #17929: [FLINK-21467][docs] Clarify javadocs of Bounded(One/Multi)Input interfaces

2021-12-01 Thread GitBox


akalash commented on a change in pull request #17929:
URL: https://github.com/apache/flink/pull/17929#discussion_r759980427



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/BoundedMultiInput.java
##
@@ -19,13 +19,25 @@
 
 import org.apache.flink.annotation.PublicEvolving;
 
-/** Interface for the multi-input operators that can process EndOfInput event. 
*/
+/**
+ * Interface for multi-input operators that need to be notified about the 
logical/semantical end of
+ * input.
+ *
+ * @see BoundedOneInput
+ */
 @PublicEvolving
 public interface BoundedMultiInput {
 
 /**
- * It is notified that no more data will arrive on the input identified by 
the {@code inputId}.
- * The {@code inputId} is numbered starting from 1, and `1` indicates the 
first input.
+ * It is notified that no more data will arrive from the input identified 
by the {@code
+ * inputId}. The {@code inputId} is numbered starting from 1, and `1` 
indicates the first input.
+ *
+ * WARNING: It is not safe to use this method to commit any 
transactions or other side
+ * effects! You can use this method to e.g. flush data buffered for the 
given input or implement
+ * an ordered reading from multiple inputs via {@link InputSelectable}.
+ *
+ * NOTE: Classes should not implement both {@link 
BoundedOneInput} and {@link
+ * BoundedMultiInput} at the same time!

Review comment:
   > NOTE: Classes should not implement both {@link 
BoundedOneInput} and {@link
   >  * BoundedMultiInput} at the same time!
   
   Now this one is in the wrong place. I think it should be a comment on the class, the same as for `BoundedOneInput`.
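   For illustration, a minimal sketch (not code from this PR) of how an operator might use `endInput(int)` in the way the javadoc recommends — flushing per-input buffers rather than committing side effects; the class and field names below are hypothetical:
   
```java
import org.apache.flink.streaming.api.operators.BoundedMultiInput;

import java.util.ArrayList;
import java.util.List;

// Hypothetical fragment of a two-input operator: it buffers elements per input
// and drains the matching buffer once that input reaches its logical end.
public class BufferingTwoInputFragment implements BoundedMultiInput {

    private final List<String> firstInputBuffer = new ArrayList<>();
    private final List<String> secondInputBuffer = new ArrayList<>();

    @Override
    public void endInput(int inputId) {
        // inputId is 1-based: 1 identifies the first input, 2 the second.
        // Safe use per the javadoc: flush buffered data here, but do not
        // commit transactions or trigger other external side effects.
        if (inputId == 1) {
            flush(firstInputBuffer);
        } else {
            flush(secondInputBuffer);
        }
    }

    private void flush(List<String> buffer) {
        // Emitting the buffered elements downstream is omitted in this sketch.
        buffer.clear();
    }
}
```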




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17973: [FLINK-15493][test] Add retry rule for all tests based on KafkaTestBase

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17973:
URL: https://github.com/apache/flink/pull/17973#issuecomment-983428028


   
   ## CI report:
   
   * c793a6fe01bdf5634fdd28f6fd9c3c1f0094d2ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27343)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] matriv commented on a change in pull request #17911: [FLINK-24902][table-planner] Port integer to boolean and boolean to numeric to CastRule

2021-12-01 Thread GitBox


matriv commented on a change in pull request #17911:
URL: https://github.com/apache/flink/pull/17911#discussion_r759987143



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/BooleanToNumericCastRule.java
##
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.functions.casting;
+
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeFamily;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+
+import static 
org.apache.flink.table.planner.codegen.CodeGenUtils.primitiveLiteralForType;
+import static 
org.apache.flink.table.planner.codegen.calls.BuiltInMethods.DECIMAL_ZERO;
+import static 
org.apache.flink.table.planner.codegen.calls.BuiltInMethods.INTEGRAL_TO_DECIMAL;
+import static 
org.apache.flink.table.planner.functions.casting.CastRuleUtils.staticCall;
+import static 
org.apache.flink.table.planner.functions.casting.CastRuleUtils.ternaryOperator;
+
+/** {@link LogicalTypeRoot#BOOLEAN} to {@link LogicalTypeFamily#NUMERIC} 
conversions. */
+public class BooleanToNumericCastRule
+extends AbstractExpressionCodeGeneratorCastRule {
+
+static final BooleanToNumericCastRule INSTANCE = new 
BooleanToNumericCastRule();
+
+private BooleanToNumericCastRule() {
+super(
+CastRulePredicate.builder()
+.input(LogicalTypeRoot.BOOLEAN)
+.target(LogicalTypeFamily.NUMERIC)
+.build());
+}
+
+@Override
+public String generateExpression(
+CodeGeneratorCastRule.Context context,
+String inputTerm,
+LogicalType inputLogicalType,
+LogicalType targetLogicalType) {
+return ternaryOperator(
+inputTerm, trueValue(targetLogicalType), 
falseValue(targetLogicalType));
+}
+
+private String trueValue(LogicalType target) {
+switch (target.getTypeRoot()) {
+case DECIMAL:
+DecimalType decimalType = (DecimalType) target;
+return staticCall(
+INTEGRAL_TO_DECIMAL(),
+1,
+decimalType.getPrecision(),
+decimalType.getScale());
+case TINYINT:
+return primitiveLiteralForType((byte) 1);
+case SMALLINT:
+return primitiveLiteralForType((short) 1);
+case INTEGER:
+return primitiveLiteralForType(1);
+case BIGINT:
+return primitiveLiteralForType(1L);
+case FLOAT:
+return primitiveLiteralForType(1f);
+case DOUBLE:
+return primitiveLiteralForType(1d);
+}
+throw new IllegalArgumentException("This is a bug. Please file an 
issue.");

Review comment:
   nit, don't have to change: it's more a matter of personal taste to use a `default` statement to throw the exception (also below), but it's up to you.
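   As a purely illustrative sketch of that suggestion (not the code in this PR, and relying on the same static imports as the class above), the unreachable case could be funneled through a `default` branch:
   
```java
// Hypothetical variant of trueValue(...) from the diff above, using a default
// branch to throw instead of falling through to a trailing throw statement.
private String trueValue(LogicalType target) {
    switch (target.getTypeRoot()) {
        case DECIMAL:
            DecimalType decimalType = (DecimalType) target;
            return staticCall(
                    INTEGRAL_TO_DECIMAL(),
                    1,
                    decimalType.getPrecision(),
                    decimalType.getScale());
        case TINYINT:
            return primitiveLiteralForType((byte) 1);
        case SMALLINT:
            return primitiveLiteralForType((short) 1);
        case INTEGER:
            return primitiveLiteralForType(1);
        case BIGINT:
            return primitiveLiteralForType(1L);
        case FLOAT:
            return primitiveLiteralForType(1f);
        case DOUBLE:
            return primitiveLiteralForType(1d);
        default:
            throw new IllegalArgumentException("This is a bug. Please file an issue.");
    }
}
```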

##
File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
##
@@ -1096,29 +1096,6 @@ object ScalarOperatorGens {
   operandTerm => s"$operandTerm.toBytes($serTerm)"
 }
 
-  // Note: SQL2003 $6.12 - casting is not allowed between boolean and 
numeric types.

Review comment:
   I would actually maybe move it to `LogicalTypeCasts`? So that we know that we chose to implement these casts instead of following the SQL spec?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-25128) Introduce flink-table-planner-loader

2021-12-01 Thread Francesco Guardiani (Jira)
Francesco Guardiani created FLINK-25128:
---

 Summary: Introduce flink-table-planner-loader
 Key: FLINK-25128
 URL: https://issues.apache.org/jira/browse/FLINK-25128
 Project: Flink
  Issue Type: Sub-task
Reporter: Francesco Guardiani


For more details, see 
https://docs.google.com/document/d/12yDUCnvcwU2mODBKTHQ1xhfOq1ujYUrXltiN_rbhT34/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25129) Update docs and examples to use flink-table-planner-loader instead of flink-table-planner

2021-12-01 Thread Francesco Guardiani (Jira)
Francesco Guardiani created FLINK-25129:
---

 Summary: Update docs and examples to use 
flink-table-planner-loader instead of flink-table-planner
 Key: FLINK-25129
 URL: https://issues.apache.org/jira/browse/FLINK-25129
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Examples, Table SQL / API
Reporter: Francesco Guardiani


For more details 
https://docs.google.com/document/d/12yDUCnvcwU2mODBKTHQ1xhfOq1ujYUrXltiN_rbhT34/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25128) Introduce flink-table-planner-loader

2021-12-01 Thread Francesco Guardiani (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Guardiani updated FLINK-25128:

Component/s: Table SQL / Planner

> Introduce flink-table-planner-loader
> 
>
> Key: FLINK-25128
> URL: https://issues.apache.org/jira/browse/FLINK-25128
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Francesco Guardiani
>Priority: Major
>
> For more details, see 
> https://docs.google.com/document/d/12yDUCnvcwU2mODBKTHQ1xhfOq1ujYUrXltiN_rbhT34/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25114) Remove flink-scala dependency from flink-table-runtime

2021-12-01 Thread Francesco Guardiani (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Guardiani updated FLINK-25114:

Component/s: Table SQL / Runtime

> Remove flink-scala dependency from flink-table-runtime
> --
>
> Key: FLINK-25114
> URL: https://issues.apache.org/jira/browse/FLINK-25114
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: Francesco Guardiani
>Assignee: Francesco Guardiani
>Priority: Major
>
> flink-scala should no longer be necessary for flink-table-runtime. We should 
> try to remove it in order to help with the parent task 
> [FLINK-24427|https://issues.apache.org/jira/browse/FLINK-24427]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25130) Update flink-table-uber to ship flink-table-planner-loader instead of flink-table-planner

2021-12-01 Thread Francesco Guardiani (Jira)
Francesco Guardiani created FLINK-25130:
---

 Summary: Update flink-table-uber to ship 
flink-table-planner-loader instead of flink-table-planner
 Key: FLINK-25130
 URL: https://issues.apache.org/jira/browse/FLINK-25130
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Francesco Guardiani


This should also be tested by the sql client.

This change should be tested then by our e2e tests, specifically 
flink-end-to-end-tests/flink-batch-sql-test.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25130) Update flink-table-uber to ship flink-table-planner-loader instead of flink-table-planner

2021-12-01 Thread Francesco Guardiani (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Guardiani updated FLINK-25130:

Description: 
For more details, see the parent task.

This change should be tested by our e2e tests, specifically 
flink-end-to-end-tests/flink-batch-sql-test.

  was:
This should also be tested by the sql client.

This change should be tested then by our e2e tests, specifically 
flink-end-to-end-tests/flink-batch-sql-test.


> Update flink-table-uber to ship flink-table-planner-loader instead of 
> flink-table-planner
> -
>
> Key: FLINK-25130
> URL: https://issues.apache.org/jira/browse/FLINK-25130
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Francesco Guardiani
>Priority: Major
>
> For more details, see the parent task.
> This change should be tested by our e2e tests, specifically 
> flink-end-to-end-tests/flink-batch-sql-test.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25131) Update sql client to ship flink-table-planner-loader instead of flink-table-planner

2021-12-01 Thread Francesco Guardiani (Jira)
Francesco Guardiani created FLINK-25131:
---

 Summary: Update sql client to ship flink-table-planner-loader 
instead of flink-table-planner
 Key: FLINK-25131
 URL: https://issues.apache.org/jira/browse/FLINK-25131
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Client
Reporter: Francesco Guardiani


See the parent task for more details.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] zentol commented on a change in pull request #17972: [FLINK-24506][config] Adds checkpoint directory to CheckpointConfig

2021-12-01 Thread GitBox


zentol commented on a change in pull request #17972:
URL: https://github.com/apache/flink/pull/17972#discussion_r76634



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/environment/CheckpointConfigFromConfigurationTest.java
##
@@ -20,8 +20,16 @@
 
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.runtime.state.CheckpointStorage;
+import org.apache.flink.runtime.state.storage.FileSystemCheckpointStorage;
 import org.apache.flink.streaming.api.CheckpointingMode;
 
+import org.apache.flink.shaded.guava30.com.google.common.base.Preconditions;

Review comment:
   Use Flink's `Preconditions` instead of the shaded Guava one.
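   For reference, a minimal sketch of the suggested swap (illustrative only; the surrounding class and method are hypothetical):
   
```java
// Replace the shaded Guava import ...
// import org.apache.flink.shaded.guava30.com.google.common.base.Preconditions;
// ... with Flink's own utility class:
import org.apache.flink.util.Preconditions;

class PreconditionsUsageSketch {
    static void requireStorage(Object checkpointStorage) {
        // Same argument-checking style, without a dependency on shaded Guava.
        Preconditions.checkNotNull(checkpointStorage, "checkpointStorage must not be null");
    }
}
```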




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #17972: [FLINK-24506][config] Adds checkpoint directory to CheckpointConfig

2021-12-01 Thread GitBox


zentol commented on a change in pull request #17972:
URL: https://github.com/apache/flink/pull/17972#discussion_r760001055



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/environment/CheckpointConfigFromConfigurationTest.java
##
@@ -90,7 +97,81 @@
 .whenSetFromFile("execution.checkpointing.unaligned", 
"true")
 
.viaSetter(CheckpointConfig::enableUnalignedCheckpoints)
 
.getterVia(CheckpointConfig::isUnalignedCheckpointsEnabled)
-.nonDefaultValue(true));
+.nonDefaultValue(true),
+TestSpec.testValue(
+(CheckpointStorage)
+new FileSystemCheckpointStorage(
+
"file:///path/to/checkpoint/dir"))
+.whenSetFromFile("state.checkpoints.dir", 
"file:///path/to/checkpoint/dir")
+.viaSetter(CheckpointConfig::setCheckpointStorage)
+.getterVia(CheckpointConfig::getCheckpointStorage)
+.nonDefaultValue(
+new 
FileSystemCheckpointStorage("file:///path/to/checkpoint/dir"))
+
.customMatcher(FileSystemCheckpointStorageMatcher::new));
+}
+
+/**
+ * {@code FileSystemCheckpointStorageMatcher} verifies that the set {@link 
CheckpointStorage} is
+ * of type {@link FileSystemCheckpointStorage} pointing to the same 
filesystem path.
+ */
+private static class FileSystemCheckpointStorageMatcher

Review comment:
   Implement an AssertJ-based assertion instead.
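   A possible shape for that, sketched with AssertJ (the helper class and method names here are illustrative, not the final test code):
   
```java
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.state.CheckpointStorage;
import org.apache.flink.runtime.state.storage.FileSystemCheckpointStorage;

import static org.assertj.core.api.Assertions.assertThat;

class CheckpointStorageAssertionsSketch {
    // Hypothetical helper replacing the custom matcher: verifies that the configured
    // storage is a FileSystemCheckpointStorage pointing at the expected checkpoint path.
    static void assertIsFileSystemStorageAt(CheckpointStorage actual, String expectedPath) {
        assertThat(actual).isInstanceOf(FileSystemCheckpointStorage.class);
        assertThat(((FileSystemCheckpointStorage) actual).getCheckpointPath())
                .isEqualTo(new Path(expectedPath));
    }
}
```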




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul opened a new pull request #17974: [FLINK-15493][test] Add retry rule for all tests based on KafkaTestBase

2021-12-01 Thread GitBox


fapaul opened a new pull request #17974:
URL: https://github.com/apache/flink/pull/17974


   Unchanged backport of #17973 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #17972: [FLINK-24506][config] Adds checkpoint directory to CheckpointConfig

2021-12-01 Thread GitBox


zentol commented on a change in pull request #17972:
URL: https://github.com/apache/flink/pull/17972#discussion_r760003494



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/environment/CheckpointConfigFromConfigurationTest.java
##
@@ -90,7 +97,81 @@
 .whenSetFromFile("execution.checkpointing.unaligned", 
"true")
 
.viaSetter(CheckpointConfig::enableUnalignedCheckpoints)
 
.getterVia(CheckpointConfig::isUnalignedCheckpointsEnabled)
-.nonDefaultValue(true));
+.nonDefaultValue(true),
+TestSpec.testValue(
+(CheckpointStorage)
+new FileSystemCheckpointStorage(
+
"file:///path/to/checkpoint/dir"))
+.whenSetFromFile("state.checkpoints.dir", 
"file:///path/to/checkpoint/dir")
+.viaSetter(CheckpointConfig::setCheckpointStorage)
+.getterVia(CheckpointConfig::getCheckpointStorage)
+.nonDefaultValue(
+new 
FileSystemCheckpointStorage("file:///path/to/checkpoint/dir"))

Review comment:
   Shouldn't the paths be different from the configured path? As is, if the config doesn't forward it, then surely the assertion would still succeed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17932: [FLINK-25079][table-common] Add some initial assertj assertions for table data and types apis

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17932:
URL: https://github.com/apache/flink/pull/17932#issuecomment-980084610


   
   ## CI report:
   
   * b710cc7d87487f5a91cc9ff2aa71f64e7ba463ce Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27289)
 
   * 03951aea9d48a1955f459250208bf498b014342a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27336)
 
   * aa44565b71a1180508e63152555e342ff77e760d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27344)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17974: [FLINK-15493][test] Add retry rule for all tests based on KafkaTestBase

2021-12-01 Thread GitBox


flinkbot commented on pull request #17974:
URL: https://github.com/apache/flink/pull/17974#issuecomment-983456661


   
   ## CI report:
   
   * da9b379479034541309e880b405d6a7065e55f2d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17974: [FLINK-15493][test] Add retry rule for all tests based on KafkaTestBase

2021-12-01 Thread GitBox


flinkbot commented on pull request #17974:
URL: https://github.com/apache/flink/pull/17974#issuecomment-983456811


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit da9b379479034541309e880b405d6a7065e55f2d (Wed Dec 01 
09:34:04 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] echauchot commented on pull request #17839: [FLINK-24859] document new connectors formats

2021-12-01 Thread GitBox


echauchot commented on pull request #17839:
URL: https://github.com/apache/flink/pull/17839#issuecomment-983457016


   @MartijnVisser thanks for your review !


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise merged pull request #17941: [FLINK-25092][tests][elasticsearch] Refactor test to use artifact cacher

2021-12-01 Thread GitBox


AHeise merged pull request #17941:
URL: https://github.com/apache/flink/pull/17941


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise merged pull request #17942: [FLINK-25093][tests] Refactor test to use retry function for cloning Flink docker repo

2021-12-01 Thread GitBox


AHeise merged pull request #17942:
URL: https://github.com/apache/flink/pull/17942


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17974: [FLINK-15493][test] Add retry rule for all tests based on KafkaTestBase

2021-12-01 Thread GitBox


flinkbot edited a comment on pull request #17974:
URL: https://github.com/apache/flink/pull/17974#issuecomment-983456661


   
   ## CI report:
   
   * da9b379479034541309e880b405d6a7065e55f2d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27345)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] matriv commented on pull request #17771: [FLINK-24813][table-planner]Improve ImplicitTypeConversionITCase

2021-12-01 Thread GitBox


matriv commented on pull request #17771:
URL: https://github.com/apache/flink/pull/17771#issuecomment-983459440


   @xuyangzhong @godfreyhe Is this good to go?
   @xuyangzhong Can you please rebase with master to resolve the conflicts?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25092) Implement artifact cacher for Bash based Elasticsearch test

2021-12-01 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451663#comment-17451663
 ] 

Martijn Visser commented on FLINK-25092:


Fixed in master via: 560502fa93624eb80291dedc5ee288c641b920a2

> Implement artifact cacher for Bash based Elasticsearch test
> ---
>
> Key: FLINK-25092
> URL: https://issues.apache.org/jira/browse/FLINK-25092
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch, Tests
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> The `elasticsearch-common.sh` script downloads Elasticsearch for tests and already 
> has a retry feature in it. This should be refactored to use the recently 
> introduced common retry function.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] slinkydeveloper commented on a change in pull request #17897: [FLINK-24687][table] Move FileSystemTableSource/Sink in flink-connector-files

2021-12-01 Thread GitBox


slinkydeveloper commented on a change in pull request #17897:
URL: https://github.com/apache/flink/pull/17897#discussion_r760008292



##
File path: flink-table/flink-table-uber/pom.xml
##
@@ -88,6 +88,11 @@ under the License.
flink-cep
${project.version}

+   
+   org.apache.flink

Review comment:
   Uhm you're right, I forgot what the initial issue title was about 
:smiley: 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



