[GitHub] [flink] flinkbot edited a comment on pull request #18319: [FLINK-25531][connectors/kafka] Force immediate shutdown of FlinkInternalKafkaProducer to speed up testRetryComittableOnRetriableErro

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18319:
URL: https://github.com/apache/flink/pull/18319#issuecomment-1008898822


   
   ## CI report:
   
   * eb853c67af624764b4e5c0c3fce054a7fdedbf0e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29182)
 
   * 8ad706f3ef9dbe276cd6887f61ed6b19e59af9fe Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29226)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-25266) StreamingKafkaITCase fails with incorrect number of messages

2022-01-10 Thread Fabian Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Paul resolved FLINK-25266.
-
Resolution: Fixed

> StreamingKafkaITCase fails with incorrect number of messages
> 
>
> Key: FLINK-25266
> URL: https://issues.apache.org/jira/browse/FLINK-25266
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Assignee: Fabian Paul
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=27973&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a
> {code}
> Dec 10 17:21:01 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 1, 
> Time elapsed: 163.429 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> Dec 10 17:21:01 [ERROR] StreamingKafkaITCase.testKafka  Time elapsed: 163.421 
> s  <<< ERROR!
> Dec 10 17:21:01 java.io.IOException: Could not read expected number of 
> messages.
> Dec 10 17:21:01   at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.readMessage(LocalStandaloneKafkaResource.java:368)
> Dec 10 17:21:01   at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:186)
> Dec 10 17:21:01   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Dec 10 17:21:01   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 10 17:21:01   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 10 17:21:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Dec 10 17:21:01   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Dec 10 17:21:01   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Dec 10 17:21:01   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Dec 10 17:21:01   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Dec 10 17:21:01   at 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48)
> Dec 10 17:21:01   at 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48)
> Dec 10 17:21:01   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Dec 10 17:21:01   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Dec 10 17:21:01   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Dec 10 17:21:01   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25266) StreamingKafkaITCase fails with incorrect number of messages

2022-01-10 Thread Fabian Paul (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472530#comment-17472530
 ] 

Fabian Paul commented on FLINK-25266:
-

Merged in master: 0bc2234b60d1a0635e238d18990695943158123c

> StreamingKafkaITCase fails with incorrect number of messages
> 
>
> Key: FLINK-25266
> URL: https://issues.apache.org/jira/browse/FLINK-25266
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Assignee: Fabian Paul
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=27973&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a
> {code}
> Dec 10 17:21:01 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 1, 
> Time elapsed: 163.429 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> Dec 10 17:21:01 [ERROR] StreamingKafkaITCase.testKafka  Time elapsed: 163.421 
> s  <<< ERROR!
> Dec 10 17:21:01 java.io.IOException: Could not read expected number of 
> messages.
> Dec 10 17:21:01   at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.readMessage(LocalStandaloneKafkaResource.java:368)
> Dec 10 17:21:01   at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:186)
> Dec 10 17:21:01   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Dec 10 17:21:01   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 10 17:21:01   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 10 17:21:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Dec 10 17:21:01   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Dec 10 17:21:01   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Dec 10 17:21:01   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Dec 10 17:21:01   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Dec 10 17:21:01   at 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48)
> Dec 10 17:21:01   at 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48)
> Dec 10 17:21:01   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Dec 10 17:21:01   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Dec 10 17:21:01   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Dec 10 17:21:01   at java.lang.Thread.run(Thread.java:748)
> {code}





[GitHub] [flink] fapaul merged pull request #18105: [FLINK-25266][e2e] Convert StreamingKafkaITCase to SmokeKafkaITCase covering application packaging

2022-01-10 Thread GitBox


fapaul merged pull request #18105:
URL: https://github.com/apache/flink/pull/18105


   






[jira] [Closed] (FLINK-25603) make the BlobServer use aws s3

2022-01-10 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-25603.
--
Resolution: Duplicate

> make the BlobServer use aws s3
> --
>
> Key: FLINK-25603
> URL: https://issues.apache.org/jira/browse/FLINK-25603
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.13.3
>Reporter: lupan
>Priority: Major
>
> Currently, blob.storage.directory does not support using AWS S3 as
> storage. Details as follows:
> When I use the following configuration:
> {code:java}
> blob.storage.directory: s3://iceberg-bucket/flink/blob {code}
> I get the following error:
> {code:java}
> taskmanager    | 2022-01-11 02:41:11,460 ERROR 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner      [] - Terminating 
> TaskManagerRunner with exit code 1.
> taskmanager    | org.apache.flink.util.FlinkException: Failed to start the 
> TaskManagerRunner.
> taskmanager    |        at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:374)
>  ~[flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    |        at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerProcessSecurely$3(TaskManagerRunner.java:413)
>  ~[flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    |        at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>  ~[flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    |        at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerProcessSecurely(TaskManagerRunner.java:413)
>  [flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    |        at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerProcessSecurely(TaskManagerRunner.java:396)
>  [flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    |        at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:354)
>  [flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    | Caused by: java.io.IOException: Could not create storage 
> directory for BLOB store in 's3:/iceberg-bucket/flink/blob'.
> taskmanager    |        at 
> org.apache.flink.runtime.blob.BlobUtils.initLocalStorageDirectory(BlobUtils.java:139)
>  ~[flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    |        at 
> org.apache.flink.runtime.blob.AbstractBlobCache.(AbstractBlobCache.java:89)
>  ~[flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    |        at 
> org.apache.flink.runtime.blob.PermanentBlobCache.(PermanentBlobCache.java:93)
>  ~[flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    |        at 
> org.apache.flink.runtime.blob.BlobCacheService.(BlobCacheService.java:55)
>  ~[flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    |        at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.(TaskManagerRunner.java:169)
>  ~[flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    |        at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:367)
>  ~[flink-dist_2.12-1.13.3.jar:1.13.3]
> taskmanager    |        ... 5 more{code}
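
A minimal, hedged sketch of the distinction behind the duplicate resolution: blob.storage.directory expects a local filesystem path, while an object store such as S3 is normally referenced through the high-availability storage directory instead (assuming HA is enabled; the paths below are placeholders):

{code:java}
import org.apache.flink.configuration.BlobServerOptions;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.HighAvailabilityOptions;

public class BlobStorageConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // 'blob.storage.directory' must point at a local directory on each process.
        conf.set(BlobServerOptions.STORAGE_DIRECTORY, "/tmp/flink-blobs");
        // Object stores are referenced through the HA storage directory instead.
        conf.set(HighAvailabilityOptions.HA_STORAGE_PATH, "s3://iceberg-bucket/flink/ha");
        System.out.println(conf);
    }
}
{code}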





[jira] [Commented] (FLINK-25584) Azure failed on install node and npm for Flink : Runtime web

2022-01-10 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472528#comment-17472528
 ] 

Till Rohrmann commented on FLINK-25584:
---

cc [~MartijnVisser], [~chesnay]

> Azure failed on install node and npm for Flink : Runtime web
> 
>
> Key: FLINK-25584
> URL: https://issues.apache.org/jira/browse/FLINK-25584
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Runtime / Web Frontend
>Affects Versions: 1.15.0, 1.14.2
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> [INFO] --- frontend-maven-plugin:1.11.0:install-node-and-npm (install node 
> and npm) @ flink-runtime-web ---
> [INFO] Installing node version v12.14.1
> [INFO] Downloading 
> https://nodejs.org/dist/v12.14.1/node-v12.14.1-linux-x64.tar.gz to 
> /__w/1/.m2/repository/com/github/eirslett/node/12.14.1/node-12.14.1-linux-x64.tar.gz
> [INFO] No proxies configured
> [INFO] No proxy was configured, downloading directly
> [INFO] 
> 
> [INFO] Reactor Summary:
> 
> [ERROR] Failed to execute goal 
> com.github.eirslett:frontend-maven-plugin:1.11.0:install-node-and-npm 
> (install node and npm) on project flink-runtime-web: Could not download 
> Node.js: Could not download 
> https://nodejs.org/dist/v12.14.1/node-v12.14.1-linux-x64.tar.gz: Remote host 
> terminated the handshake: SSL peer shut down incorrectly -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :flink-runtime-web
> {code}
> (The Reactor Summary is omitted)





[jira] [Commented] (FLINK-24456) Support bounded offset in the Kafka table connector

2022-01-10 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472526#comment-17472526
 ] 

Martijn Visser commented on FLINK-24456:


[~monster#12] What do you think is the status of this ticket? The release-1.15 
branch will likely be cut on February 4th; do you think it could be merged before 
that time? What's left to do from a PR perspective?

> Support bounded offset in the Kafka table connector
> ---
>
> Key: FLINK-24456
> URL: https://issues.apache.org/jira/browse/FLINK-24456
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: Haohui Mai
>Assignee: ZhuoYu Chen
>Priority: Minor
>  Labels: pull-request-available
>
> The {{setBounded}} API in the DataStream connector of Kafka is particularly 
> useful when writing tests. Unfortunately the table connector of Kafka lacks 
> the same API.
> It would be good to have this API added.
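
For context, a minimal sketch of the existing DataStream-side API the description refers to (assuming the Flink 1.14+ KafkaSource builder; the broker address and topic name are placeholders):

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class BoundedKafkaSourceSketch {
    public static void main(String[] args) {
        KafkaSource<String> source =
                KafkaSource.<String>builder()
                        .setBootstrapServers("localhost:9092")
                        .setTopics("input-topic")
                        .setValueOnlyDeserializer(new SimpleStringSchema())
                        // Stop reading once the end offsets seen at startup are reached,
                        // which is what makes the source usable in bounded tests.
                        .setBounded(OffsetsInitializer.latest())
                        .build();
        System.out.println(source);
    }
}
{code}

The ticket asks for an equivalent bounded-offset option in the SQL/Table connector.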





[GitHub] [flink] flinkbot edited a comment on pull request #18323: [FLINK-25601][state backends][config] Update the 'state.backend' option

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18323:
URL: https://github.com/apache/flink/pull/18323#issuecomment-1009676741


   
   ## CI report:
   
   * 78bd79178543b399f726115f3ea73ff394e37902 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29225)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18320: [FLINK-25427] Decreased number of tests

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18320:
URL: https://github.com/apache/flink/pull/18320#issuecomment-1008933892


   
   ## CI report:
   
   * cff02a6f62345d451348c8a9646850126f88d3b5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29218)
 
   * 1a6ce4f8c7b287a67eb886a9b186a7d4146348dc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29224)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18319: [FLINK-25531][connectors/kafka] Force immediate shutdown of FlinkInternalKafkaProducer to speed up testRetryComittableOnRetriableErro

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18319:
URL: https://github.com/apache/flink/pull/18319#issuecomment-1008898822


   
   ## CI report:
   
   * eb853c67af624764b4e5c0c3fce054a7fdedbf0e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29182)
 
   * 8ad706f3ef9dbe276cd6887f61ed6b19e59af9fe UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] zhuzhurk commented on a change in pull request #18126: [FLINK-25036][runtime] Add stage wise scheduling strategy

2022-01-10 Thread GitBox


zhuzhurk commented on a change in pull request #18126:
URL: https://github.com/apache/flink/pull/18126#discussion_r781817552



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/StagewiseSchedulingStrategy.java
##
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler.strategy;
+
+import org.apache.flink.runtime.execution.ExecutionState;
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.DeploymentOption;
+import org.apache.flink.runtime.scheduler.ExecutionVertexDeploymentOption;
+import org.apache.flink.runtime.scheduler.SchedulerOperations;
+import org.apache.flink.runtime.scheduler.SchedulingTopologyListener;
+import org.apache.flink.util.IterableUtils;
+
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * {@link SchedulingStrategy} instance which schedules tasks in granularity of stage. Note that this

Review comment:
   I think the granularity is actually vertex instead of stage, because vertices are scheduled one by one.
   But there is a constraint that a vertex can be scheduled only if all the inputs of the vertices in the same stage are consumable.

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/StagewiseSchedulingStrategy.java
##
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler.strategy;
+
+import org.apache.flink.runtime.execution.ExecutionState;
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.DeploymentOption;
+import org.apache.flink.runtime.scheduler.ExecutionVertexDeploymentOption;
+import org.apache.flink.runtime.scheduler.SchedulerOperations;
+import org.apache.flink.runtime.scheduler.SchedulingTopologyListener;
+import org.apache.flink.util.IterableUtils;
+
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * {@link SchedulingStrategy} instance which schedules tasks in granularity of stage. Note that this
+ * strategy only supports ALL_EDGES_BLOCKING batch jobs.
+ */
+public class StagewiseSchedulingStrategy implements SchedulingStrategy, SchedulingTopologyListener {
+
+    private final SchedulerOperations schedulerOperations;
+
+    private final SchedulingTopology schedulingTopology;
+
+    private final DeploymentOption deploymentOption = new DeploymentOption(false);
+
+    private final Set<ExecutionVertexID> pendingVertices = new HashSet<>();
+
+    public StagewiseSchedulingStrategy(
+            final SchedulerOperations schedulerOperations,
+            final SchedulingTopology schedulingTopology) {
+
+        this.schedulerOperations = checkNotNull(schedulerOperations);
+        this.schedulingTopology = checkNotNull(schedulingTopology);
+        schedulingTopology.registerSchedulingTopologyListener(this);
+    }
+
+    @Override
+    public void startScheduling() {

[GitHub] [flink] flinkbot commented on pull request #18323: [FLINK-25601][state backends][config] Update the 'state.backend' option

2022-01-10 Thread GitBox


flinkbot commented on pull request #18323:
URL: https://github.com/apache/flink/pull/18323#issuecomment-1009676741


   
   ## CI report:
   
   * 78bd79178543b399f726115f3ea73ff394e37902 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18320: [FLINK-25427] Decreased number of tests

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18320:
URL: https://github.com/apache/flink/pull/18320#issuecomment-1008933892


   
   ## CI report:
   
   * cff02a6f62345d451348c8a9646850126f88d3b5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29218)
 
   * 1a6ce4f8c7b287a67eb886a9b186a7d4146348dc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18050: [FLINK-25034][runtime] Support flexible number of subpartitions in IntermediateResultPartition

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18050:
URL: https://github.com/apache/flink/pull/18050#issuecomment-988466514


   
   ## CI report:
   
   * e2e89c176f1e20f3583e6bdc850a0c31251cf988 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28556)
 
   * 73c2ecd546c2ad9be7ee20382716edc9f6c6c574 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29223)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-23391) KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure

2022-01-10 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472520#comment-17472520
 ] 

Qingsheng Ren commented on FLINK-23391:
---

I'm refactoring the Kafka test infra to use KafkaContainer in FLINK-25249, which 
should fix this flaky test. The PR is in review now.
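
For reference, a minimal sketch of what a Testcontainers-based setup typically looks like (the image tag and usage below are illustrative placeholders, not the actual FLINK-25249 code):

{code:java}
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

public class KafkaContainerSketch {
    public static void main(String[] args) {
        try (KafkaContainer kafka =
                new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))) {
            kafka.start();
            // Tests point the Kafka source/reader at the containerized broker,
            // removing the dependency on a locally managed Kafka installation.
            String bootstrapServers = kafka.getBootstrapServers();
            System.out.println("Kafka available at " + bootstrapServers);
        }
    }
}
{code}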

> KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure
> ---
>
> Key: FLINK-23391
> URL: https://issues.apache.org/jira/browse/FLINK-23391
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.13.1, 1.15.0
>Reporter: Xintong Song
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.13.6, 1.14.3
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20456&view=logs&j=c5612577-f1f7-5977-6ff6-7432788526f7&t=53f6305f-55e6-561c-8f1e-3a1dde2c77df&l=6783
> {code}
> Jul 14 23:00:26 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 99.93 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest
> Jul 14 23:00:26 [ERROR] 
> testKafkaSourceMetrics(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.225 s  <<< ERROR!
> Jul 14 23:00:26 java.util.concurrent.TimeoutException: Offsets are not 
> committed successfully. Dangling offsets: 
> {15213={KafkaSourceReaderTest-0=OffsetAndMetadata{offset=10, 
> leaderEpoch=null, metadata=''}}}
> Jul 14 23:00:26   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
> Jul 14 23:00:26   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testKafkaSourceMetrics(KafkaSourceReaderTest.java:275)
> Jul 14 23:00:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 14 23:00:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 14 23:00:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 14 23:00:26   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 14 23:00:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 14 23:00:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 14 23:00:26   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
> Jul 14 23:00:26   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jul 14 23:00:26   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jul 14 23:00:26   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jul 14 23:00:26   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jul 14 23:00:26   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Jul 14 23:00:26   at org.junit.runners.Suite.runChild(Suite.java:128)
> Jul 14 23:00:26   at org.junit.runners.Suite.runChild(Suite.java:27)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> J

[jira] [Commented] (FLINK-23391) KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure

2022-01-10 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472517#comment-17472517
 ] 

Qingsheng Ren commented on FLINK-23391:
---

[~trohrmann] I'm refactoring the Kafka test infra to use KafkaContainer in 
FLINK-25249. The PR is in review now.

> KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure
> ---
>
> Key: FLINK-23391
> URL: https://issues.apache.org/jira/browse/FLINK-23391
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.13.1, 1.15.0
>Reporter: Xintong Song
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.13.6, 1.14.3
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20456&view=logs&j=c5612577-f1f7-5977-6ff6-7432788526f7&t=53f6305f-55e6-561c-8f1e-3a1dde2c77df&l=6783
> {code}
> Jul 14 23:00:26 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 99.93 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest
> Jul 14 23:00:26 [ERROR] 
> testKafkaSourceMetrics(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.225 s  <<< ERROR!
> Jul 14 23:00:26 java.util.concurrent.TimeoutException: Offsets are not 
> committed successfully. Dangling offsets: 
> {15213={KafkaSourceReaderTest-0=OffsetAndMetadata{offset=10, 
> leaderEpoch=null, metadata=''}}}
> Jul 14 23:00:26   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
> Jul 14 23:00:26   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testKafkaSourceMetrics(KafkaSourceReaderTest.java:275)
> Jul 14 23:00:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 14 23:00:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 14 23:00:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 14 23:00:26   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 14 23:00:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 14 23:00:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 14 23:00:26   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
> Jul 14 23:00:26   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jul 14 23:00:26   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jul 14 23:00:26   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jul 14 23:00:26   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jul 14 23:00:26   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Jul 14 23:00:26   at org.junit.runners.Suite.runChild(Suite.java:128)
> Jul 14 23:00:26   at org.junit.runners.Suite.runChild(Suite.java:27)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jul 14 23:00:26   at 
> org.juni

[GitHub] [flink] flinkbot commented on pull request #18323: [FLINK-25601][state backends][config] Update the 'state.backend' option

2022-01-10 Thread GitBox


flinkbot commented on pull request #18323:
URL: https://github.com/apache/flink/pull/18323#issuecomment-1009673644


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 78bd79178543b399f726115f3ea73ff394e37902 (Tue Jan 11 
07:39:31 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[jira] [Commented] (FLINK-23946) Application mode fails fatally when being shut down

2022-01-10 Thread Konstantin Knauf (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472513#comment-17472513
 ] 

Konstantin Knauf commented on FLINK-23946:
--

[~dmvk] This is only missing backports to 1.13 and potentially 1.12, correct?

> Application mode fails fatally when being shut down
> ---
>
> Key: FLINK-23946
> URL: https://issues.apache.org/jira/browse/FLINK-23946
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.12.5, 1.13.2
>Reporter: Till Rohrmann
>Assignee: David Morávek
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.15.0, 1.13.6, 1.14.3
>
>
> The application mode fails fatally when being shut down (e.g. if the 
> {{Dispatcher}} loses its leadership). The problem seems to be that the 
> {{ApplicationDispatcherBootstrap}} cancels the {{applicationExecutionTask}} 
> and {{applicationCompletionFuture}} that can trigger the execution of the 
> fatal exception handler in the handler of the {{applicationCompletionFuture}}.
> I suggest to only call the fatal exception handler if an unexpected exception 
> occurs in the {{applicationCompletionFuture}} callback.
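
A minimal, hedged sketch of the suggested handling (the class and variable names are illustrative, not the actual ApplicationDispatcherBootstrap code):

{code:java}
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.runtime.rpc.FatalErrorHandler;
import org.apache.flink.util.ExceptionUtils;

public class CompletionHandlingSketch {
    public static void handleCompletion(
            CompletableFuture<Void> applicationCompletionFuture,
            FatalErrorHandler fatalErrorHandler) {
        applicationCompletionFuture.whenComplete(
                (ignored, throwable) -> {
                    if (throwable == null
                            || ExceptionUtils.findThrowable(throwable, CancellationException.class)
                                    .isPresent()) {
                        // Cancellation is expected during shutdown (e.g. on leadership
                        // loss) and must not be escalated.
                        return;
                    }
                    // Only genuinely unexpected failures reach the fatal error handler.
                    fatalErrorHandler.onFatalError(throwable);
                });
    }
}
{code}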





[jira] [Commented] (FLINK-25427) SavepointITCase.testTriggerSavepointAndResumeWithNoClaim fails on AZP

2022-01-10 Thread Anton Kalashnikov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472511#comment-17472511
 ] 

Anton Kalashnikov commented on FLINK-25427:
---

[~trohrmann], yes, I have made a little progress. I successfully isolated 
`testTriggerSavepointAndResumeWithNoClaim`, and right now I at least have the 
explicit exception:
{noformat}
06:54:23,926 [flink-akka.actor.default-dispatcher-17] WARN  
org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager [] 
- Could not fulfill resource requirements of job 
0c6696a94a9c5efb3fa74a58f23cbb08. Free slots: 0
06:54:24,627 [flink-akka.actor.default-dispatcher-18] INFO  
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler [] - Failed to go 
from CreatingExecutionGraph to Executing because the ExecutionGraph cre
ation failed.
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Not 
enough resources available for scheduling.
at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.lambda$determineParallelism$21(AdaptiveScheduler.java:743)
 ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT]
at java.util.Optional.orElseThrow(Optional.java:290) ~[?:1.8.0_292]
at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.determineParallelism(AdaptiveScheduler.java:741)
 ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT]
at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.createExecutionGraphWithAvailableResourcesAsync(AdaptiveScheduler.java:915)
 ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT]
at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.goToCreatingExecutionGraph(AdaptiveScheduler.java:902)
 ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT]
at 
org.apache.flink.runtime.scheduler.adaptive.WaitingForResources.createExecutionGraphWithAvailableResources(WaitingForResources.java:178)
 ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT]
at 
org.apache.flink.runtime.scheduler.adaptive.WaitingForResources.resourceTimeout(WaitingForResources.java:174)
 ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT]
at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.runIfState(AdaptiveScheduler.java:1106)
 ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT]
at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.lambda$runIfState$26(AdaptiveScheduler.java:1121)
 ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_292]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[?:1.8.0_292]
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRunAsync$4(AkkaRpcActor.java:455)
 ~[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:455)
 ~[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:213)
 ~[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
 ~[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
 ~[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) 
[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) 
[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) 
[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) 
[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) 
[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) 
[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-SNAPSHOT]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) 
[flink-rpc-akka_0433dc93-b31b-44d6-8efc-777bd9bd7aa2.jar:1.15-S

[GitHub] [flink] flinkbot edited a comment on pull request #18050: [FLINK-25034][runtime] Support flexible number of subpartitions in IntermediateResultPartition

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18050:
URL: https://github.com/apache/flink/pull/18050#issuecomment-988466514


   
   ## CI report:
   
   * e2e89c176f1e20f3583e6bdc850a0c31251cf988 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28556)
 
   * 73c2ecd546c2ad9be7ee20382716edc9f6c6c574 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-25601) Update 'state.backend' in flink-conf.yaml

2022-01-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25601:
---
Labels: pull-request-available  (was: )

> Update 'state.backend' in flink-conf.yaml
> -
>
> Key: FLINK-25601
> URL: https://issues.apache.org/jira/browse/FLINK-25601
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.14.2
>Reporter: Ada Wong
>Assignee: Ada Wong
>Priority: Major
>  Labels: pull-request-available
>
> The value and comments of 'state.backend' in flink-conf.yaml are deprecated.
> {code:java}
> # Supported backends are 'jobmanager', 'filesystem', 'rocksdb', or the
> # <class-name-of-factory>.
> #
> # state.backend: filesystem{code}
> We should update it to the following.
>  
> {code:java}
> # Supported backends are 'hashmap', 'rocksdb', or the
> # <class-name-of-factory>.
> #
> # state.backend: hashmap {code}
>  
>  
>  
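
For comparison, a hedged sketch of the programmatic equivalent of `state.backend: hashmap` (assuming the Flink 1.14+ API; the checkpoint path is a placeholder):

{code:java}
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendConfigSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // 'hashmap' replaces the deprecated 'jobmanager'/'filesystem' values.
        env.setStateBackend(new HashMapStateBackend());
        // Checkpoint storage is configured separately from the state backend since Flink 1.13.
        env.getCheckpointConfig().setCheckpointStorage("file:///tmp/checkpoints");
    }
}
{code}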





[GitHub] [flink] deadwind4 opened a new pull request #18323: [FLINK-25601][state backends][config] Update the 'state.backend' option

2022-01-10 Thread GitBox


deadwind4 opened a new pull request #18323:
URL: https://github.com/apache/flink/pull/18323


   ## What is the purpose of the change
   
   The value and comments of 'state.backend' in flink-conf.yaml are deprecated.
   
   ## Brief change log
   
 - *Change the 'state.backend' option to the new value*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (no)
   






[jira] [Assigned] (FLINK-25604) Remove useless aggregate function

2022-01-10 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-25604:
--

Assignee: godfrey he

> Remove useless aggregate function 
> --
>
> Key: FLINK-25604
> URL: https://issues.apache.org/jira/browse/FLINK-25604
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: godfrey he
>Priority: Minor
>
> We expect that a useless aggregate call can be removed after projection push-down.
> But sometimes the planner behaves unexpectedly. For example,
> {code:sql}
> SELECT 
>   d
> FROM (
>   SELECT
> d,
> c,
> row_number() OVER (PARTITION BY d ORDER BY e desc)  review_rank
>   FROM (
> SELECT e, d, max(f) AS c FROM Table5 GROUP BY e, d)
>   )
> WHERE review_rank = 1
> {code}
> The plan is 
> {code:java}
> Calc(select=[d], where=[=(w0$o0, 1:BIGINT)])
> +- OverAggregate(partitionBy=[d], orderBy=[e DESC], window#0=[ROW_NUMBER(*) 
> AS w0$o0 ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW], select=[e, d, c, 
> w0$o0])
>+- Sort(orderBy=[d ASC, e DESC])
>   +- Exchange(distribution=[hash[d]])
>  +- HashAggregate(isMerge=[true], groupBy=[e, d], select=[e, d, 
> Final_MAX(max$0) AS c])
> +- Exchange(distribution=[hash[e, d]])
>+- LocalHashAggregate(groupBy=[e, d], select=[e, d, 
> Partial_MAX(f) AS max$0])
>   +- Calc(select=[e, d, f])
>  +- BoundedStreamScan(table=[[default_catalog, 
> default_database, Table5]], fields=[d, e, f, g, h])
> {code}
> In the above SQL, the max aggregate (max(f) AS c) could be removed because c is 
> projected out before the sink.
>  





[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   * add3e18c345cdb7f5268feaa130990b449d4ecc1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29221)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Comment Edited] (FLINK-23843) Exceptions during "SplitEnumeratorContext.runInCoordinatorThread()" should cause Global Failure instead of Process Kill

2022-01-10 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472136#comment-17472136
 ] 

Etienne Chauchot edited comment on FLINK-23843 at 1/11/22, 7:28 AM:


What is the expected behavior in case an exception is raised in the Runnable?

Should we catch it, log it and ignore it? I guess not, or more likely it depends on 
the exception. So we should provide a way for the API user (source connector 
authors) to define a callback that reacts depending on the exception. Catching the 
exception at the executor level does not prevent the executor from trying to 
instantiate a new thread, which is prohibited by the factory. So that leaves:
- either catch in the Runnable: in that case there is not much more we can do than 
write a javadoc telling source authors to catch their exceptions in their Runnable 
when using _SplitEnumeratorContext.runInCoordinatorThread(Runnable)_.
- or we allow the API user to provide his own _Thread.UncaughtExceptionHandler_ 
to handle exceptions. This is better IMHO, but it could be dangerous because 
ignoring an exception could lead to an incoherent state, so adding a strong javadoc 
on the exception handler seems mandatory.
[~trohrmann] [~sewen] [~chesnay] WDYT?


was (Author: echauchot):
What is the expected behavior in case an exception is raised in the Runnable?

Should we catch it, log it and ignore it? I guess not, or more likely it depends on 
the exception. So we should provide a way for the API user (source connector 
authors) to define a callback that reacts depending on the exception. Catching the 
exception at the executor level does not prevent the executor from trying to 
instantiate a new thread, which is prohibited by the factory. So that leaves:
- either catch in the Runnable: in that case there is not much more we can do than 
write a javadoc telling source authors to catch their exceptions in their Runnable 
when using _SplitEnumeratorContext.runInCoordinatorThread(Runnable)_.
- or we allow the API user to provide his own _Thread.UncaughtExceptionHandler_ 
to handle exceptions. This is better IMHO, but it could be dangerous because 
ignoring an exception could lead to an incoherent state, so adding a strong javadoc 
on the exception handler seems mandatory.
~trohrmann] [~sewen] [~chesnay] WDYT?
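
A minimal, hedged sketch of the first option (catching inside the Runnable); the helper class and the task it runs are hypothetical, and only SplitEnumeratorContext.runInCoordinatorThread() is the real API:

{code:java}
import org.apache.flink.api.connector.source.SplitEnumeratorContext;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class SafeCoordinatorTask {
    private static final Logger LOG = LoggerFactory.getLogger(SafeCoordinatorTask.class);

    /** Runs the task in the coordinator thread while keeping exceptions from bubbling up. */
    public static void runSafely(SplitEnumeratorContext<?> context, Runnable task) {
        context.runInCoordinatorThread(
                () -> {
                    try {
                        task.run();
                    } catch (Throwable t) {
                        // The coordinator thread stays alive; the connector decides
                        // whether to escalate this into a global failover instead of
                        // letting the exception kill the executor thread.
                        LOG.error("Task in coordinator thread failed", t);
                    }
                });
    }
}
{code}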

> Exceptions during "SplitEnumeratorContext.runInCoordinatorThread()" should 
> cause Global Failure instead of Process Kill
> ---
>
> Key: FLINK-23843
> URL: https://issues.apache.org/jira/browse/FLINK-23843
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.2
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Critical
> Fix For: 1.15.0
>
>
> Currently, when a the method 
> "SplitEnumeratorContext.runInCoordinatorThread()" throws an exception, the 
> effect is a process kill of the JobManager process.
> The chain how the process kill happens is:
> * An exception bubbling up in the executor, killing the executor thread
> * The executor starts a replacement thread, which is forbidden by the thread 
> factory (as a safety net) and causes a process kill.
> We should prevent such exceptions from bubbling up in the coordinator 
> executor.





[jira] [Comment Edited] (FLINK-23843) Exceptions during "SplitEnumeratorContext.runInCoordinatorThread()" should cause Global Failure instead of Process Kill

2022-01-10 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472136#comment-17472136
 ] 

Etienne Chauchot edited comment on FLINK-23843 at 1/11/22, 7:28 AM:


What is the expected behavior in case an exception is raised in the Runnable?

Should we catch it, log it and ignore it? I guess not, or more likely it depends on 
the exception. So we should provide a way for the API user (source connector 
authors) to define a callback that reacts depending on the exception. Catching the 
exception at the executor level does not prevent the executor from trying to 
instantiate a new thread, which is prohibited by the factory. So that leaves:
- either catch in the Runnable: in that case there is not much more we can do than 
write a javadoc telling source authors to catch their exceptions in their Runnable 
when using _SplitEnumeratorContext.runInCoordinatorThread(Runnable)_.
- or we allow the API user to provide his own _Thread.UncaughtExceptionHandler_ 
to handle exceptions. This is better IMHO, but it could be dangerous because 
ignoring an exception could lead to an incoherent state, so adding a strong javadoc 
on the exception handler seems mandatory.
~trohrmann] [~sewen] [~chesnay] WDYT?


was (Author: echauchot):
What is the expected behavior in case an exception is raised in the Runnable?

Should we catch it, log it and ignore it? I guess not, or more likely it depends on 
the exception. So we should provide a way for the API user (source connector 
authors) to define a callback that reacts depending on the exception. Catching the 
exception at the executor level does not prevent the executor from trying to 
instantiate a new thread, which is prohibited by the factory. So that leaves:
- either catch in the Runnable: in that case there is not much more we can do than 
write a javadoc telling source authors to catch their exceptions in their Runnable 
when using _SplitEnumeratorContext.runInCoordinatorThread(Runnable)_.
- or we allow the API user to provide his own _Thread.UncaughtExceptionHandler_ 
to handle exceptions. This is better IMHO, but it could be dangerous because 
ignoring an exception could lead to an incoherent state, so adding a strong javadoc 
on the exception handler seems mandatory.
=> WDYT ?

> Exceptions during "SplitEnumeratorContext.runInCoordinatorThread()" should 
> cause Global Failure instead of Process Kill
> ---
>
> Key: FLINK-23843
> URL: https://issues.apache.org/jira/browse/FLINK-23843
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.2
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Critical
> Fix For: 1.15.0
>
>
> Currently, when the method 
> "SplitEnumeratorContext.runInCoordinatorThread()" throws an exception, the 
> effect is a process kill of the JobManager process.
> The chain of events leading to the process kill is:
> * An exception bubbles up in the executor, killing the executor thread
> * The executor starts a replacement thread, which is forbidden by the thread 
> factory (as a safety net) and causes a process kill.
> We should prevent such exceptions from bubbling up in the coordinator 
> executor.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-23843) Exceptions during "SplitEnumeratorContext.runInCoordinatorThread()" should cause Global Failure instead of Process Kill

2022-01-10 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472506#comment-17472506
 ] 

Etienne Chauchot commented on FLINK-23843:
--

[~trohrmann] [~sewen] [~chesnay] WDYT?

> Exceptions during "SplitEnumeratorContext.runInCoordinatorThread()" should 
> cause Global Failure instead of Process Kill
> ---
>
> Key: FLINK-23843
> URL: https://issues.apache.org/jira/browse/FLINK-23843
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.2
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Critical
> Fix For: 1.15.0
>
>
> Currently, when the method 
> "SplitEnumeratorContext.runInCoordinatorThread()" throws an exception, the 
> effect is a process kill of the JobManager process.
> The chain of events leading to the process kill is:
> * An exception bubbles up in the executor, killing the executor thread
> * The executor starts a replacement thread, which is forbidden by the thread 
> factory (as a safety net) and causes a process kill.
> We should prevent such exceptions from bubbling up in the coordinator 
> executor.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] (FLINK-23843) Exceptions during "SplitEnumeratorContext.runInCoordinatorThread()" should cause Global Failure instead of Process Kill

2022-01-10 Thread Etienne Chauchot (Jira)


[ https://issues.apache.org/jira/browse/FLINK-23843 ]


Etienne Chauchot deleted comment on FLINK-23843:
--

was (Author: echauchot):
[~trohrmann] [~sewen] [~chesnay] WDYT?

> Exceptions during "SplitEnumeratorContext.runInCoordinatorThread()" should 
> cause Global Failure instead of Process Kill
> ---
>
> Key: FLINK-23843
> URL: https://issues.apache.org/jira/browse/FLINK-23843
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.2
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Critical
> Fix For: 1.15.0
>
>
> Currently, when the method 
> "SplitEnumeratorContext.runInCoordinatorThread()" throws an exception, the 
> effect is a process kill of the JobManager process.
> The chain of events leading to the process kill is:
> * An exception bubbles up in the executor, killing the executor thread
> * The executor starts a replacement thread, which is forbidden by the thread 
> factory (as a safety net) and causes a process kill.
> We should prevent such exceptions from bubbling up in the coordinator 
> executor.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25495) Client support attach mode when using the deployment of application mode

2022-01-10 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472504#comment-17472504
 ] 

Yang Wang commented on FLINK-25495:
---

I agree that per-job mode with attach mode also has such a limitation. So what 
I am suggesting is to check whether the job has finished via the JobManager 
REST client. Maybe FLINK-24113 could help if the JobManager terminates too fast.
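
A minimal sketch of such a check from the client side, assuming the REST 
endpoint is still reachable and using a placeholder polling interval (the class 
and method names below are only an illustration, not a concrete proposal):

{code:java}
import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.JobStatus;
import org.apache.flink.client.program.rest.RestClusterClient;
import org.apache.flink.configuration.Configuration;

/** Sketch: block until the job reaches a globally terminal state. */
public class WaitForJobTermination {

    public static JobStatus waitUntilTerminal(Configuration conf, String clusterId, JobID jobId)
            throws Exception {
        try (RestClusterClient<String> client = new RestClusterClient<>(conf, clusterId)) {
            while (true) {
                JobStatus status = client.getJobStatus(jobId).get();
                if (status.isGloballyTerminalState()) {
                    return status; // FINISHED, FAILED or CANCELED
                }
                Thread.sleep(1_000L); // placeholder polling interval
            }
        }
    }
}
{code}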

> Client support attach mode when using the deployment of application mode
> 
>
> Key: FLINK-25495
> URL: https://issues.apache.org/jira/browse/FLINK-25495
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Reporter: Junfan Zhang
>Priority: Major
>
> h2. Why
> In our internal data platform, we support Flink batch and streaming job 
> submission. To reduce the load on the submission worker, we use Flink 
> application mode to submit jobs. It's a nice feature!
> However, in batch mode, we want the Flink client to not exit until the batch 
> application has finished (no need to fetch the job result, just wait). Flink 
> currently lacks this feature, and the documentation does not state that 
> Application Mode does not support attach mode.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25593) A redundant scan could be skipped if it is an input of join and the other input is empty after partition prune

2022-01-10 Thread Jing Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang updated FLINK-25593:
---
Description: 
A redundant scan could be skipped if it is an input of a join and the other 
input is empty after partition pruning.
For example:
ltable has two partitions: pt=0 and pt=1, rtable has one partition pt1=0.
The schema of ltable is (lkey string, value int).
The schema of rtable is (rkey string, value int).

{code:sql}
SELECT * FROM ltable, rtable WHERE pt=2 and pt1=0 and `lkey`=rkey
{code}

The plan is as follows.

{code:java}
Calc(select=[lkey, value, CAST(2 AS INTEGER) AS pt, rkey, value1, CAST(0 AS 
INTEGER) AS pt1])
+- HashJoin(joinType=[InnerJoin], where=[=(lkey, rkey)], select=[lkey, value, 
rkey, value1], build=[right])
   :- Exchange(distribution=[hash[lkey]])
   :  +- TableSourceScan(table=[[hive, source_db, ltable, partitions=[], 
project=[lkey, value]]], fields=[lkey, value])
   +- Exchange(distribution=[hash[rkey]])
  +- TableSourceScan(table=[[hive, source_db, rtable, partitions=[{pt1=0}], 
project=[rkey, value1]]], fields=[rkey, value1])
{code}

There is no need to scan the right side because the left input of the join has 
0 partitions after partition pruning.


  was:
A redundant scan could be skipped if it is an input of a join and the other 
input is empty after partition pruning.
For example:
ltable has two partitions: pt=0 and pt=1, rtable has one partition pt1=0.
The schema of ltable is (lkey string, value int).
The schema of rtable is (rkey string, value int).

{code:java}
SELECT * FROM ltable, rtable WHERE pt=2 and pt1=0 and `lkey`=rkey
{code}

The plan is as follows.

{code:java}
Calc(select=[lkey, value, CAST(2 AS INTEGER) AS pt, rkey, value1, CAST(0 AS 
INTEGER) AS pt1])
+- HashJoin(joinType=[InnerJoin], where=[=(lkey, rkey)], select=[lkey, value, 
rkey, value1], build=[right])
   :- Exchange(distribution=[hash[lkey]])
   :  +- TableSourceScan(table=[[hive, source_db, ltable, partitions=[], 
project=[lkey, value]]], fields=[lkey, value])
   +- Exchange(distribution=[hash[rkey]])
  +- TableSourceScan(table=[[hive, source_db, rtable, partitions=[{pt1=0}], 
project=[rkey, value1]]], fields=[rkey, value1])
{code}

There is no need to scan the right side because the left input of the join has 
0 partitions after partition pruning.



> A redundant scan could be skipped if it is an input of join and the other 
> input is empty after partition prune
> --
>
> Key: FLINK-25593
> URL: https://issues.apache.org/jira/browse/FLINK-25593
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Priority: Major
>
> A redundant scan could be skipped if it is an input of a join and the other 
> input is empty after partition pruning.
> For example:
> ltable has two partitions: pt=0 and pt=1, rtable has one partition pt1=0.
> The schema of ltable is (lkey string, value int).
> The schema of rtable is (rkey string, value int).
> {code:sql}
> SELECT * FROM ltable, rtable WHERE pt=2 and pt1=0 and `lkey`=rkey
> {code}
> The plan is as follows.
> {code:java}
> Calc(select=[lkey, value, CAST(2 AS INTEGER) AS pt, rkey, value1, CAST(0 AS 
> INTEGER) AS pt1])
> +- HashJoin(joinType=[InnerJoin], where=[=(lkey, rkey)], select=[lkey, value, 
> rkey, value1], build=[right])
>:- Exchange(distribution=[hash[lkey]])
>:  +- TableSourceScan(table=[[hive, source_db, ltable, partitions=[], 
> project=[lkey, value]]], fields=[lkey, value])
>+- Exchange(distribution=[hash[rkey]])
>   +- TableSourceScan(table=[[hive, source_db, rtable, 
> partitions=[{pt1=0}], project=[rkey, value1]]], fields=[rkey, value1])
> {code}
> There is no need to scan the right side because the left input of the join 
> has 0 partitions after partition pruning.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25604) Remove useless aggregate function

2022-01-10 Thread Jing Zhang (Jira)
Jing Zhang created FLINK-25604:
--

 Summary: Remove useless aggregate function 
 Key: FLINK-25604
 URL: https://issues.apache.org/jira/browse/FLINK-25604
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Jing Zhang


We expect a useless aggregate call to be removed after projection push-down.
But sometimes the planner does not behave as expected. For example,

{code:sql}
SELECT 
  d
FROM (
  SELECT
d,
c,
row_number() OVER (PARTITION BY d ORDER BY e desc)  review_rank
  FROM (
SELECT e, d, max(f) AS c FROM Table5 GROUP BY e, d)
  )
WHERE review_rank = 1
{code}

The plan is 

{code:java}
Calc(select=[d], where=[=(w0$o0, 1:BIGINT)])
+- OverAggregate(partitionBy=[d], orderBy=[e DESC], window#0=[ROW_NUMBER(*) AS 
w0$o0 ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW], select=[e, d, c, 
w0$o0])
   +- Sort(orderBy=[d ASC, e DESC])
  +- Exchange(distribution=[hash[d]])
 +- HashAggregate(isMerge=[true], groupBy=[e, d], select=[e, d, 
Final_MAX(max$0) AS c])
+- Exchange(distribution=[hash[e, d]])
   +- LocalHashAggregate(groupBy=[e, d], select=[e, d, 
Partial_MAX(f) AS max$0])
  +- Calc(select=[e, d, f])
 +- BoundedStreamScan(table=[[default_catalog, 
default_database, Table5]], fields=[d, e, f, g, h])
{code}

In the above SQL, the max(f) AS c aggregate could be removed because c is 
projected out before the sink.
 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18322: [FLINK-25589][docs][connectors] Update Chinese version of Elasticsearch connector docs

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18322:
URL: https://github.com/apache/flink/pull/18322#issuecomment-1009605866


   
   ## CI report:
   
   * 310b62a595a1af2d091b6a0f2e8fc137460b92e7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29216)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25601) Update 'state.backend' in flink-conf.yaml

2022-01-10 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472503#comment-17472503
 ] 

Yun Tang commented on FLINK-25601:
--

[~ana4] Already assigned to you, please go ahead.

> Update 'state.backend' in flink-conf.yaml
> -
>
> Key: FLINK-25601
> URL: https://issues.apache.org/jira/browse/FLINK-25601
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.14.2
>Reporter: Ada Wong
>Assignee: Ada Wong
>Priority: Major
>
> The default value and comments for 'state.backend' in flink-conf.yaml refer 
> to deprecated backends.
> {code:java}
> # Supported backends are 'jobmanager', 'filesystem', 'rocksdb', or the
> # <class-name-of-factory>.
> #
> # state.backend: filesystem{code}
> We should update it to the following.
>  
> {code:java}
> # Supported backends are 'hashmap', 'rocksdb', or the
> # <class-name-of-factory>.
> #
> # state.backend: hashmap {code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25601) Update 'state.backend' in flink-conf.yaml

2022-01-10 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang reassigned FLINK-25601:


Assignee: Ada Wong

> Update 'state.backend' in flink-conf.yaml
> -
>
> Key: FLINK-25601
> URL: https://issues.apache.org/jira/browse/FLINK-25601
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.14.2
>Reporter: Ada Wong
>Assignee: Ada Wong
>Priority: Major
>
> The default value and comments for 'state.backend' in flink-conf.yaml refer 
> to deprecated backends.
> {code:java}
> # Supported backends are 'jobmanager', 'filesystem', 'rocksdb', or the
> # <class-name-of-factory>.
> #
> # state.backend: filesystem{code}
> We should update it to the following.
>  
> {code:java}
> # Supported backends are 'hashmap', 'rocksdb', or the
> # <class-name-of-factory>.
> #
> # state.backend: hashmap {code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18215: [FLINK-25392][table-planner]Support new StatementSet syntax in planner and parser

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18215:
URL: https://github.com/apache/flink/pull/18215#issuecomment-1001883080


   
   ## CI report:
   
   * d5dfa7d9c3bc5607e96b9ccfbeb6ce1a00bc13ca Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29215)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17601: [FLINK-24697][flink-connectors-kafka] add auto.offset.reset configuration for group-offsets startup mode

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #17601:
URL: https://github.com/apache/flink/pull/17601#issuecomment-954546978


   
   ## CI report:
   
   * 493bad4a9158fd2e5e83b6a0eb416cc96633be05 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29214)
 
   * 4ea3c9f8fd437a0e6d183bfb62101f54f89de34e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29219)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24892) FLIP-187: Adaptive Batch Scheduler

2022-01-10 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-24892:

Fix Version/s: 1.15.0

> FLIP-187: Adaptive Batch Scheduler
> --
>
> Key: FLINK-24892
> URL: https://issues.apache.org/jira/browse/FLINK-24892
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Reporter: Lijie Wang
>Assignee: Lijie Wang
>Priority: Major
> Fix For: 1.15.0
>
>
> Introduce a new scheduler to Flink: the adaptive batch scheduler. The new 
> scheduler can automatically decide the parallelism of each job vertex for 
> batch jobs, according to the volume of data the vertex needs to process.
> More details see 
> [FLIP-187.|https://cwiki.apache.org/confluence/display/FLINK/FLIP-187%3A+Adaptive+Batch+Job+Scheduler]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18320: [FLINK-25427] Decreased number of tests

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18320:
URL: https://github.com/apache/flink/pull/18320#issuecomment-1008933892


   
   ## CI report:
   
   * cff02a6f62345d451348c8a9646850126f88d3b5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29218)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25584) Azure failed on install node and npm for Flink : Runtime web

2022-01-10 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25584:

Priority: Critical  (was: Major)

> Azure failed on install node and npm for Flink : Runtime web
> 
>
> Key: FLINK-25584
> URL: https://issues.apache.org/jira/browse/FLINK-25584
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Runtime / Web Frontend
>Affects Versions: 1.15.0, 1.14.2
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> [INFO] --- frontend-maven-plugin:1.11.0:install-node-and-npm (install node 
> and npm) @ flink-runtime-web ---
> [INFO] Installing node version v12.14.1
> [INFO] Downloading 
> https://nodejs.org/dist/v12.14.1/node-v12.14.1-linux-x64.tar.gz to 
> /__w/1/.m2/repository/com/github/eirslett/node/12.14.1/node-12.14.1-linux-x64.tar.gz
> [INFO] No proxies configured
> [INFO] No proxy was configured, downloading directly
> [INFO] 
> 
> [INFO] Reactor Summary:
> 
> [ERROR] Failed to execute goal 
> com.github.eirslett:frontend-maven-plugin:1.11.0:install-node-and-npm 
> (install node and npm) on project flink-runtime-web: Could not download 
> Node.js: Could not download 
> https://nodejs.org/dist/v12.14.1/node-v12.14.1-linux-x64.tar.gz: Remote host 
> terminated the handshake: SSL peer shut down incorrectly -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :flink-runtime-web
> {code}
> (The reactor summary is omitted)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25584) Azure failed on install node and npm for Flink : Runtime web

2022-01-10 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25584:

Affects Version/s: 1.14.2

> Azure failed on install node and npm for Flink : Runtime web
> 
>
> Key: FLINK-25584
> URL: https://issues.apache.org/jira/browse/FLINK-25584
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Runtime / Web Frontend
>Affects Versions: 1.15.0, 1.14.2
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> [INFO] --- frontend-maven-plugin:1.11.0:install-node-and-npm (install node 
> and npm) @ flink-runtime-web ---
> [INFO] Installing node version v12.14.1
> [INFO] Downloading 
> https://nodejs.org/dist/v12.14.1/node-v12.14.1-linux-x64.tar.gz to 
> /__w/1/.m2/repository/com/github/eirslett/node/12.14.1/node-12.14.1-linux-x64.tar.gz
> [INFO] No proxies configured
> [INFO] No proxy was configured, downloading directly
> [INFO] 
> 
> [INFO] Reactor Summary:
> 
> [ERROR] Failed to execute goal 
> com.github.eirslett:frontend-maven-plugin:1.11.0:install-node-and-npm 
> (install node and npm) on project flink-runtime-web: Could not download 
> Node.js: Could not download 
> https://nodejs.org/dist/v12.14.1/node-v12.14.1-linux-x64.tar.gz: Remote host 
> terminated the handshake: SSL peer shut down incorrectly -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :flink-runtime-web
> {code}
> (The reactor summary is omitted)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25498) FlinkKafkaProducerITCase. testRestoreToCheckpointAfterExceedingProducersPool failed on azure

2022-01-10 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25498:

Description: 
{code:java}
2021-12-31T08:14:15.9176809Z Dec 31 08:14:15 java.lang.AssertionError: Expected 
elements: <[42]>, but was: elements: <[42, 42, 42, 42]>
2021-12-31T08:14:15.9177351Z Dec 31 08:14:15at 
org.junit.Assert.fail(Assert.java:89)
2021-12-31T08:14:15.9177963Z Dec 31 08:14:15at 
org.apache.flink.streaming.connectors.kafka.KafkaTestBase.assertExactlyOnceForTopic(KafkaTestBase.java:331)
2021-12-31T08:14:15.9183636Z Dec 31 08:14:15at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testRestoreToCheckpointAfterExceedingProducersPool(FlinkKafkaProducerITCase.java:159)
2021-12-31T08:14:15.9184980Z Dec 31 08:14:15at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2021-12-31T08:14:15.9185908Z Dec 31 08:14:15at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2021-12-31T08:14:15.9186954Z Dec 31 08:14:15at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-12-31T08:14:15.9187888Z Dec 31 08:14:15at 
java.lang.reflect.Method.invoke(Method.java:498)
2021-12-31T08:14:15.9188819Z Dec 31 08:14:15at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
2021-12-31T08:14:15.9189826Z Dec 31 08:14:15at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2021-12-31T08:14:15.9191068Z Dec 31 08:14:15at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
2021-12-31T08:14:15.9191923Z Dec 31 08:14:15at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2021-12-31T08:14:15.9193167Z Dec 31 08:14:15at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2021-12-31T08:14:15.9193891Z Dec 31 08:14:15at 
org.apache.flink.testutils.junit.RetryRule$RetryOnFailureStatement.evaluate(RetryRule.java:135)
2021-12-31T08:14:15.9194516Z Dec 31 08:14:15at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
2021-12-31T08:14:15.9195078Z Dec 31 08:14:15at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
2021-12-31T08:14:15.9195616Z Dec 31 08:14:15at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2021-12-31T08:14:15.9196194Z Dec 31 08:14:15at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
2021-12-31T08:14:15.9196762Z Dec 31 08:14:15at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
2021-12-31T08:14:15.9197389Z Dec 31 08:14:15at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
2021-12-31T08:14:15.9197988Z Dec 31 08:14:15at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
2021-12-31T08:14:15.9198818Z Dec 31 08:14:15at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
2021-12-31T08:14:15.9199544Z Dec 31 08:14:15at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
2021-12-31T08:14:15.9200367Z Dec 31 08:14:15at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
2021-12-31T08:14:15.9200914Z Dec 31 08:14:15at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
2021-12-31T08:14:15.9201465Z Dec 31 08:14:15at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
2021-12-31T08:14:15.9202369Z Dec 31 08:14:15at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2021-12-31T08:14:15.9203399Z Dec 31 08:14:15at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2021-12-31T08:14:15.9203973Z Dec 31 08:14:15at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
2021-12-31T08:14:15.9204504Z Dec 31 08:14:15at 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2021-12-31T08:14:15.9205027Z Dec 31 08:14:15at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2021-12-31T08:14:15.9205549Z Dec 31 08:14:15at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
2021-12-31T08:14:15.9206053Z Dec 31 08:14:15at 
org.junit.runner.JUnitCore.run(JUnitCore.java:137)
2021-12-31T08:14:15.9206540Z Dec 31 08:14:15at 
org.junit.runner.JUnitCore.run(JUnitCore.java:115)
2021-12-31T08:14:15.9207069Z Dec 31 08:14:15at 
org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
2021-12-31T08:14:15.9207661Z Dec 31 08:14:15at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
2021-12-31T08:14:15.9208412Z Dec 31 08:14:15at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
2021-12-31T08:14:15.9208957Z Dec 31 08:14:15at 
java.util.Iterator.forEachRemaining(Iterator.java:116)
2021-12-31T08:14:15.92

[GitHub] [flink] flinkbot edited a comment on pull request #18296: [FLINK-25486][Runtime/Coordination] Fix the bug that flink will lost state when zookeeper leader changes

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18296:
URL: https://github.com/apache/flink/pull/18296#issuecomment-1007312371


   
   ## CI report:
   
   * ac842b597501d4f7d53967fb3f12e5288ea65674 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29213)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16207: [FLINK-23031][table-runtime]Make early processing time trigger d'ont repeat itself when no new element come in

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #16207:
URL: https://github.com/apache/flink/pull/16207#issuecomment-864350218


   
   ## CI report:
   
   * 233d84facbe69347c2f66952024485c8a61492f4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29212)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-25032) Allow to create execution vertices and execution edges lazily

2022-01-10 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-25032.
---
Fix Version/s: 1.15.0
   Resolution: Done

master/1.15:
d2bfe404e199f98115df419232399bcc11b59e5a
3b1f15c7b53b4dfa55b0cda9e6da75d565034372
47a520becd6b6bc7caf03248572fc38f03aa6c61

> Allow to create execution vertices and execution edges lazily
> -
>
> Key: FLINK-25032
> URL: https://issues.apache.org/jira/browse/FLINK-25032
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Lijie Wang
>Assignee: Lijie Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> For a dynamic graph, its execution vertices and execution edges should be 
> lazily created.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] zhuzhurk closed pull request #18023: [FLINK-25032] Allow to create execution vertices and execution edges lazily

2022-01-10 Thread GitBox


zhuzhurk closed pull request #18023:
URL: https://github.com/apache/flink/pull/18023


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on a change in pull request #18302: [FLINK-25569][core] Add decomposed Sink V2 interface

2022-01-10 Thread GitBox


gaoyunhaii commented on a change in pull request #18302:
URL: https://github.com/apache/flink/pull/18302#discussion_r781797292



##
File path: 
flink-core/src/main/java/org/apache/flink/api/connector/sink2/Committer.java
##
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.connector.sink2;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+import java.io.IOException;
+import java.util.Collection;
+
+/**
+ * The {@code Committer} is responsible for committing the data staged by the 
{@link
+ * TwoPhaseCommittingSink.PrecommittingSinkWriter} in the second step of a 2pc 
protocol.
+ *
+ * A commit must be idempotent: If some failure occurs in Flink during 
commit phase, Flink will
+ * restart from previous checkpoint and re-attempt to commit all committables. 
Thus, some or all
+ * committables may have already been committed. These {@link CommitRequest}s 
must not change the
+ * external system and implementers are asked to signal {@link 
CommitRequest#alreadyCommitted()}.
+ *
+ * @param <CommT> The type of information needed to commit the staged data
+ */
+@PublicEvolving
+public interface Committer<CommT> extends AutoCloseable {
+/**
+ * Commit the given list of {@link CommT}.
+ *
+ * @param committables A list of commit requests staged by the sink writer.
+ * @throws IOException for reasons that may yield a complete restart of 
the job.
+ */
+void commit(Collection<CommitRequest<CommT>> committables)
+throws IOException, InterruptedException;
+
+/**
+ * A request to commit a specific committable.
+ *
+ * @param <CommT>
+ */
+@PublicEvolving
+interface CommitRequest<CommT> {
+
+/** Returns the committable. */
+CommT getCommittable();
+
+/**
+ * Returns how often this particular committable has been retried. 
Starts at 0 for the first
+ * attempt.
+ */
+int getNumberOfRetries();
+
+/**
+ * The commit failed for known reason and should not be retried.
+ *
+ * Currently calling this method only logs the error, discards the 
comittable and
+ * continues. In the future the behaviour might be configurable.
+ */
+void failedWithKnownReason(Throwable t);
+
+/**
+ * The commit failed for unknown reason and should not be retried.
+ *
+ * Currently calling this method fails the job. In the future the 
behaviour might be
+ * configurable.
+ */
+void failedWithUnknownReason(Throwable t);
+
+/**
+ * The commit failed for a retriable reason. If the sink supports a 
retry maximum, this may
+ * permanently fail after reaching that maximum. Else the committable 
will be retried as
+ * long as this method is invoked after each attempt.
+ */
+void retryLater();
+
+/**
+ * Updates the underlying committable and retries later (see {@link 
#retryLater()} for a
+ * description). This method can be used if a committable partially 
succeeded.
+ */
+void updateAndRetryLater(CommT committable);
+
+/**
+ * Signals that a committable is skipped as it was committed already 
in a previous run. Use
+ * of this method is optional but eases bookkeeping and debugging. It 
also serves as a code
+ * documentation for the branches dealing with recovery.
+ */
+void alreadyCommitted();

Review comment:
   I think the main issue here with `failedWithKnownReason`, 
`failedWithUnknownReason` and `alreadyCommitted` is that they are not verbs. 
They look to me more like getter methods that query the state of the 
`CommitRequest`, i.e. whether it failed with a known / unknown reason or is 
already committed.
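
   For illustration, a sketch of what verb-style names could look like (the 
names below are only examples to make the point, not a concrete API proposal):

```java
// Illustrative only: verb-style alternatives for the three methods above.
public interface CommitRequest<CommT> {

    CommT getCommittable();

    int getNumberOfRetries();

    void retryLater();

    void updateAndRetryLater(CommT committable);

    // instead of failedWithKnownReason(Throwable)
    void signalFailedWithKnownReason(Throwable t);

    // instead of failedWithUnknownReason(Throwable)
    void signalFailedWithUnknownReason(Throwable t);

    // instead of alreadyCommitted()
    void signalAlreadyCommitted();
}
```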




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18312: Update copyright year to 2022 for NOTICE files.

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18312:
URL: https://github.com/apache/flink/pull/18312#issuecomment-1008686402


   
   ## CI report:
   
   * 052f42bf3e600e915fc7ec97cdfe36686c07eca7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29211)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18312: Update copyright year to 2022 for NOTICE files.

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18312:
URL: https://github.com/apache/flink/pull/18312#issuecomment-1008686402


   
   ## CI report:
   
   * 47950aefbb787649083d9e6f496c6df9dd297169 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29210)
 
   * 052f42bf3e600e915fc7ec97cdfe36686c07eca7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29211)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25592) Improvement of parser, optimizer and execution for Flink Batch SQL

2022-01-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-25592:

Priority: Major  (was: Minor)

> Improvement of parser, optimizer and execution for Flink Batch SQL
> --
>
> Key: FLINK-25592
> URL: https://issues.apache.org/jira/browse/FLINK-25592
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner, Table SQL / Runtime
>Reporter: Jing Zhang
>Priority: Major
>
> This is a parent JIRA to track improvements on Flink Batch SQL, including 
> parser, optimizer and execution.
> For example,
> 1. the same SQL query may be translated into different plans under the Hive 
> dialect and the default dialect
> 2. specify hash/sort aggregate strategy and hash/sort merge join strategy in 
> sql hint
> 3. take parquet metadata into consideration in optimization
> 4. and so on
> Please note, some improvements are not limited to batch SQL. Streaming SQL 
> jobs could also benefit from some improvements in this JIRA.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25592) Improvement of parser, optimizer and execution for Flink Batch SQL

2022-01-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-25592:

Component/s: Table SQL / API
 Table SQL / Planner
 Table SQL / Runtime

> Improvement of parser, optimizer and execution for Flink Batch SQL
> --
>
> Key: FLINK-25592
> URL: https://issues.apache.org/jira/browse/FLINK-25592
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner, Table SQL / Runtime
>Reporter: Jing Zhang
>Priority: Minor
>
> This is a parent JIRA to track improvements on Flink Batch SQL, including 
> parser, optimizer and execution.
> For example,
> 1. the same SQL query may be translated into different plans under the Hive 
> dialect and the default dialect
> 2. specify hash/sort aggregate strategy and hash/sort merge join strategy in 
> sql hint
> 3. take parquet metadata into consideration in optimization
> 4. and so on
> Please note, some improvements are not limited to batch SQL. Streaming SQL 
> jobs could also benefit from some improvements in this JIRA.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25420) Port JDBC Source to new Source API (FLIP-27)

2022-01-10 Thread RocMarshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472489#comment-17472489
 ] 

RocMarshal commented on FLINK-25420:


[~MartijnVisser] IMO, wouldn't it be better to open a new FLIP for the 
corresponding discussion and design? 

> Port JDBC Source to new Source API (FLIP-27)
> 
>
> Key: FLINK-25420
> URL: https://issues.apache.org/jira/browse/FLINK-25420
> Project: Flink
>  Issue Type: Improvement
>Reporter: Martijn Visser
>Priority: Major
>
> The current JDBC connector is using the old SourceFunction interface, which 
> is going to be deprecated. We should port/refactor the JDBC Source to use the 
> new Source API, based on FLIP-27 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25584) Azure failed on install node and npm for Flink : Runtime web

2022-01-10 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472486#comment-17472486
 ] 

Yun Gao commented on FLINK-25584:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29206&view=logs&j=3e60b793-4158-5027-ac6d-4cdc51dffe1e&t=d5ed4970-7667-5f7e-2ece-62e410f74748&l=4466
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29206&view=logs&j=ed6509f5-1153-558c-557a-5ee0afbcdf24&t=241b1e5e-1a8e-5e6a-469a-a9b8cad87065&l=5018
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29206&view=logs&j=b1e44b80-6687-5cc5-6529-292f7212c609&t=e749141c-4fcf-5663-d533-e10832b9aaf9&l=4514

> Azure failed on install node and npm for Flink : Runtime web
> 
>
> Key: FLINK-25584
> URL: https://issues.apache.org/jira/browse/FLINK-25584
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Runtime / Web Frontend
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> [INFO] --- frontend-maven-plugin:1.11.0:install-node-and-npm (install node 
> and npm) @ flink-runtime-web ---
> [INFO] Installing node version v12.14.1
> [INFO] Downloading 
> https://nodejs.org/dist/v12.14.1/node-v12.14.1-linux-x64.tar.gz to 
> /__w/1/.m2/repository/com/github/eirslett/node/12.14.1/node-12.14.1-linux-x64.tar.gz
> [INFO] No proxies configured
> [INFO] No proxy was configured, downloading directly
> [INFO] 
> 
> [INFO] Reactor Summary:
> 
> [ERROR] Failed to execute goal 
> com.github.eirslett:frontend-maven-plugin:1.11.0:install-node-and-npm 
> (install node and npm) on project flink-runtime-web: Could not download 
> Node.js: Could not download 
> https://nodejs.org/dist/v12.14.1/node-v12.14.1-linux-x64.tar.gz: Remote host 
> terminated the handshake: SSL peer shut down incorrectly -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :flink-runtime-web
> {code}
> (The reactor summary is omitted)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25602) make the BlobServer use aws s3

2022-01-10 Thread lupan (Jira)
lupan created FLINK-25602:
-

 Summary: make the BlobServer use aws s3
 Key: FLINK-25602
 URL: https://issues.apache.org/jira/browse/FLINK-25602
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Network
Affects Versions: 1.13.3
Reporter: lupan


Currently, blob.storage.directory does not support using AWS S3 as storage. 
Details are as follows:

When I use the following configuration:
{code:java}
blob.storage.directory: s3://iceberg-bucket/flink/blob {code}
I get the following error:
{code:java}
taskmanager    | 2022-01-11 02:41:11,460 ERROR 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner      [] - Terminating 
TaskManagerRunner with exit code 1.
taskmanager    | org.apache.flink.util.FlinkException: Failed to start the 
TaskManagerRunner.
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:374)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerProcessSecurely$3(TaskManagerRunner.java:413)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerProcessSecurely(TaskManagerRunner.java:413)
 [flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerProcessSecurely(TaskManagerRunner.java:396)
 [flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:354)
 [flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    | Caused by: java.io.IOException: Could not create storage 
directory for BLOB store in 's3:/iceberg-bucket/flink/blob'.
taskmanager    |        at 
org.apache.flink.runtime.blob.BlobUtils.initLocalStorageDirectory(BlobUtils.java:139)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.blob.AbstractBlobCache.(AbstractBlobCache.java:89)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.blob.PermanentBlobCache.(PermanentBlobCache.java:93)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.blob.BlobCacheService.(BlobCacheService.java:55) 
~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.(TaskManagerRunner.java:169)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:367)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        ... 5 more{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25603) make the BlobServer use aws s3

2022-01-10 Thread lupan (Jira)
lupan created FLINK-25603:
-

 Summary: make the BlobServer use aws s3
 Key: FLINK-25603
 URL: https://issues.apache.org/jira/browse/FLINK-25603
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Network
Affects Versions: 1.13.3
Reporter: lupan


Currently, blob.storage.directory does not support using AWS S3 as storage. 
Details are as follows:

When I use the following configuration:
{code:java}
blob.storage.directory: s3://iceberg-bucket/flink/blob {code}
I get the following error:
{code:java}
taskmanager    | 2022-01-11 02:41:11,460 ERROR 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner      [] - Terminating 
TaskManagerRunner with exit code 1.
taskmanager    | org.apache.flink.util.FlinkException: Failed to start the 
TaskManagerRunner.
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:374)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerProcessSecurely$3(TaskManagerRunner.java:413)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerProcessSecurely(TaskManagerRunner.java:413)
 [flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerProcessSecurely(TaskManagerRunner.java:396)
 [flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:354)
 [flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    | Caused by: java.io.IOException: Could not create storage 
directory for BLOB store in 's3:/iceberg-bucket/flink/blob'.
taskmanager    |        at 
org.apache.flink.runtime.blob.BlobUtils.initLocalStorageDirectory(BlobUtils.java:139)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.blob.AbstractBlobCache.(AbstractBlobCache.java:89)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.blob.PermanentBlobCache.(PermanentBlobCache.java:93)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.blob.BlobCacheService.(BlobCacheService.java:55) 
~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.(TaskManagerRunner.java:169)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:367)
 ~[flink-dist_2.12-1.13.3.jar:1.13.3]
taskmanager    |        ... 5 more{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   * add3e18c345cdb7f5268feaa130990b449d4ecc1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29221)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   * add3e18c345cdb7f5268feaa130990b449d4ecc1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] imaffe commented on a change in pull request #17937: [FLINK-25044][testing][Pulsar Connector] Add More Unit Test For Pulsar Source

2022-01-10 Thread GitBox


imaffe commented on a change in pull request #17937:
URL: https://github.com/apache/flink/pull/17937#discussion_r781787236



##
File path: 
flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/reader/split/PulsarPartitionSplitReaderTestBase.java
##
@@ -39,78 +50,164 @@
 import org.junit.jupiter.api.extension.TestTemplateInvocationContextProvider;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.stream.Stream;
 
 import static java.time.Duration.ofSeconds;
 import static java.util.Collections.singletonList;
 import static org.apache.commons.lang3.RandomStringUtils.randomAlphabetic;
+import static 
org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyAdmin;
+import static 
org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyThrow;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_ENABLE_AUTO_ACKNOWLEDGE_MESSAGE;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_MAX_FETCH_RECORDS;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_MAX_FETCH_TIME;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_SUBSCRIPTION_NAME;
-import static 
org.apache.flink.connector.pulsar.source.enumerator.cursor.StopCursor.never;
 import static 
org.apache.flink.connector.pulsar.source.enumerator.topic.TopicNameUtils.topicNameWithPartition;
 import static 
org.apache.flink.connector.pulsar.source.enumerator.topic.TopicRange.createFullRange;
+import static 
org.apache.flink.connector.pulsar.source.reader.deserializer.PulsarDeserializationSchema.flinkSchema;
+import static 
org.apache.flink.connector.pulsar.testutils.PulsarTestCommonUtils.isAssignableFromParameterContext;
+import static 
org.apache.flink.connector.pulsar.testutils.extension.PulsarSourceOrderlinessExtension.PULSAR_SOURCE_READER_SUBSCRIPTION_TYPE_STORE_KEY;
+import static 
org.apache.flink.connector.pulsar.testutils.extension.PulsarSourceOrderlinessExtension.PULSAR_TEST_RESOURCE_NAMESPACE;
+import static 
org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.DEFAULT_PARTITIONS;
+import static 
org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.NUM_RECORDS_PER_PARTITION;
 import static org.apache.flink.core.testutils.CommonTestUtils.waitUtil;
+import static 
org.apache.flink.shaded.guava30.com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 import static org.apache.pulsar.client.api.Schema.STRING;
-import static org.junit.jupiter.api.Assertions.assertNull;
+import static org.assertj.core.api.Assertions.assertThat;
 
 /** Test utils for split readers. */
+@ExtendWith({
+PulsarSourceOrderlinessExtension.class,
+TestLoggerExtension.class,
+})
 public abstract class PulsarPartitionSplitReaderTestBase extends 
PulsarTestSuiteBase {

Review comment:
   And this new fix should have covered the comments ~ Let me know if we 
need other fixes or if I missed anything ~ Thank you so much @fapaul for the 
valuable feedback ~




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17937: [FLINK-25044][testing][Pulsar Connector] Add More Unit Test For Pulsar Source

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #17937:
URL: https://github.com/apache/flink/pull/17937#issuecomment-981287170


   
   ## CI report:
   
   * 48c44601bb0c15a3975d8ad41249231f06cbced2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29073)
 
   * e45e95974b2d7229be6f04f51828d09b50a5486d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29220)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17601: [FLINK-24697][flink-connectors-kafka] add auto.offset.reset configuration for group-offsets startup mode

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #17601:
URL: https://github.com/apache/flink/pull/17601#issuecomment-954546978


   
   ## CI report:
   
   * f75205b826071098291c54e42f612d2954ed0f0f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29011)
 
   * 493bad4a9158fd2e5e83b6a0eb416cc96633be05 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29214)
 
   * 4ea3c9f8fd437a0e6d183bfb62101f54f89de34e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29219)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25209) SQLClientSchemaRegistryITCase#testReading is broken

2022-01-10 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472483#comment-17472483
 ] 

Jingsong Lee commented on FLINK-25209:
--

[~renqs] will take this.

CC: [~dwysakowicz]  

> SQLClientSchemaRegistryITCase#testReading is broken
> ---
>
> Key: FLINK-25209
> URL: https://issues.apache.org/jira/browse/FLINK-25209
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Table SQL / Client, Tests
>Affects Versions: 1.13.3
>Reporter: Chesnay Schepler
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.15.0
>
>
> https://dev.azure.com/chesnay/flink/_build/results?buildId=1880&view=logs&j=0e31ee24-31a6-528c-a4bf-45cde9b2a14e&t=ff03a8fa-e84e-5199-efb2-5433077ce8e2
> {code:java}
> Dec 06 11:33:16 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 236.417 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> Dec 06 11:33:16 [ERROR] 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading  
> Time elapsed: 152.789 s  <<< ERROR!
> Dec 06 11:33:16 java.io.IOException: Could not read expected number of 
> messages.
> Dec 06 11:33:16   at 
> org.apache.flink.tests.util.kafka.KafkaContainerClient.readMessages(KafkaContainerClient.java:115)
> Dec 06 11:33:16   at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading(SQLClientSchemaRegistryITCase.java:165)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25209) SQLClientSchemaRegistryITCase#testReading is broken

2022-01-10 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-25209:


Assignee: Qingsheng Ren  (was: Jingsong Lee)

> SQLClientSchemaRegistryITCase#testReading is broken
> ---
>
> Key: FLINK-25209
> URL: https://issues.apache.org/jira/browse/FLINK-25209
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Table SQL / Client, Tests
>Affects Versions: 1.13.3
>Reporter: Chesnay Schepler
>Assignee: Qingsheng Ren
>Priority: Blocker
> Fix For: 1.15.0
>
>
> https://dev.azure.com/chesnay/flink/_build/results?buildId=1880&view=logs&j=0e31ee24-31a6-528c-a4bf-45cde9b2a14e&t=ff03a8fa-e84e-5199-efb2-5433077ce8e2
> {code:java}
> Dec 06 11:33:16 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 236.417 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> Dec 06 11:33:16 [ERROR] 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading  
> Time elapsed: 152.789 s  <<< ERROR!
> Dec 06 11:33:16 java.io.IOException: Could not read expected number of 
> messages.
> Dec 06 11:33:16   at 
> org.apache.flink.tests.util.kafka.KafkaContainerClient.readMessages(KafkaContainerClient.java:115)
> Dec 06 11:33:16   at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading(SQLClientSchemaRegistryITCase.java:165)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17937: [FLINK-25044][testing][Pulsar Connector] Add More Unit Test For Pulsar Source

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #17937:
URL: https://github.com/apache/flink/pull/17937#issuecomment-981287170


   
   ## CI report:
   
   * 48c44601bb0c15a3975d8ad41249231f06cbced2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29073)
 
   * e45e95974b2d7229be6f04f51828d09b50a5486d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17601: [FLINK-24697][flink-connectors-kafka] add auto.offset.reset configuration for group-offsets startup mode

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #17601:
URL: https://github.com/apache/flink/pull/17601#issuecomment-954546978


   
   ## CI report:
   
   * f75205b826071098291c54e42f612d2954ed0f0f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29011)
 
   * 493bad4a9158fd2e5e83b6a0eb416cc96633be05 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29214)
 
   * 4ea3c9f8fd437a0e6d183bfb62101f54f89de34e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] imaffe commented on a change in pull request #17937: [FLINK-25044][testing][Pulsar Connector] Add More Unit Test For Pulsar Source

2022-01-10 Thread GitBox


imaffe commented on a change in pull request #17937:
URL: https://github.com/apache/flink/pull/17937#discussion_r781784568



##
File path: 
flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/reader/split/PulsarPartitionSplitReaderTestBase.java
##
@@ -39,78 +50,164 @@
 import org.junit.jupiter.api.extension.TestTemplateInvocationContextProvider;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.stream.Stream;
 
 import static java.time.Duration.ofSeconds;
 import static java.util.Collections.singletonList;
 import static org.apache.commons.lang3.RandomStringUtils.randomAlphabetic;
+import static 
org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyAdmin;
+import static 
org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyThrow;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_ENABLE_AUTO_ACKNOWLEDGE_MESSAGE;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_MAX_FETCH_RECORDS;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_MAX_FETCH_TIME;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_SUBSCRIPTION_NAME;
-import static 
org.apache.flink.connector.pulsar.source.enumerator.cursor.StopCursor.never;
 import static 
org.apache.flink.connector.pulsar.source.enumerator.topic.TopicNameUtils.topicNameWithPartition;
 import static 
org.apache.flink.connector.pulsar.source.enumerator.topic.TopicRange.createFullRange;
+import static 
org.apache.flink.connector.pulsar.source.reader.deserializer.PulsarDeserializationSchema.flinkSchema;
+import static 
org.apache.flink.connector.pulsar.testutils.PulsarTestCommonUtils.isAssignableFromParameterContext;
+import static 
org.apache.flink.connector.pulsar.testutils.extension.PulsarSourceOrderlinessExtension.PULSAR_SOURCE_READER_SUBSCRIPTION_TYPE_STORE_KEY;
+import static 
org.apache.flink.connector.pulsar.testutils.extension.PulsarSourceOrderlinessExtension.PULSAR_TEST_RESOURCE_NAMESPACE;
+import static 
org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.DEFAULT_PARTITIONS;
+import static 
org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.NUM_RECORDS_PER_PARTITION;
 import static org.apache.flink.core.testutils.CommonTestUtils.waitUtil;
+import static 
org.apache.flink.shaded.guava30.com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 import static org.apache.pulsar.client.api.Schema.STRING;
-import static org.junit.jupiter.api.Assertions.assertNull;
+import static org.assertj.core.api.Assertions.assertThat;
 
 /** Test utils for split readers. */
+@ExtendWith({
+PulsarSourceOrderlinessExtension.class,
+TestLoggerExtension.class,
+})
 public abstract class PulsarPartitionSplitReaderTestBase extends 
PulsarTestSuiteBase {

Review comment:
   For composition over inheritance, I totally understand the design 
considerations, but I feel that using composition here is currently a bit of an 
overkill from a readability perspective. I think once the test methods continue 
to grow (if needed), we can consider moving them into a dedicated helper class 
later, roughly along the lines of the sketch below. 
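   A minimal, purely hypothetical sketch of the two styles (the class and method 
names below are made up and are not the actual test code):
   
   ```java
   // Hypothetical sketch only -- these names do not exist in the Flink code base.

   // Style 1: inheritance -- subclasses reuse a protected helper from the base.
   abstract class SplitReaderTestBase {
       protected String fullTopicName(String topic, int partition) {
           return topic + "-partition-" + partition;
       }
   }

   class OrderedSplitReaderTest extends SplitReaderTestBase {
       void testConsumeFromPartition() {
           String topic = fullTopicName("orders", 0); // inherited helper
           // ... drive the split reader against this topic
       }
   }

   // Style 2: composition -- the helper lives in a utility object the test holds.
   final class SplitReaderTestHelpers {
       String fullTopicName(String topic, int partition) {
           return topic + "-partition-" + partition;
       }
   }

   class UnorderedSplitReaderTest {
       private final SplitReaderTestHelpers helpers = new SplitReaderTestHelpers();

       void testConsumeFromPartition() {
           String topic = helpers.fullTopicName("orders", 0); // composed helper
           // ... drive the split reader against this topic
       }
   }
   ```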




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   * add3e18c345cdb7f5268feaa130990b449d4ecc1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] imaffe commented on a change in pull request #17937: [FLINK-25044][testing][Pulsar Connector] Add More Unit Test For Pulsar Source

2022-01-10 Thread GitBox


imaffe commented on a change in pull request #17937:
URL: https://github.com/apache/flink/pull/17937#discussion_r781783619



##
File path: 
flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/reader/split/PulsarPartitionSplitReaderTestBase.java
##
@@ -39,78 +50,164 @@
 import org.junit.jupiter.api.extension.TestTemplateInvocationContextProvider;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.stream.Stream;
 
 import static java.time.Duration.ofSeconds;
 import static java.util.Collections.singletonList;
 import static org.apache.commons.lang3.RandomStringUtils.randomAlphabetic;
+import static 
org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyAdmin;
+import static 
org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyThrow;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_ENABLE_AUTO_ACKNOWLEDGE_MESSAGE;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_MAX_FETCH_RECORDS;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_MAX_FETCH_TIME;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_SUBSCRIPTION_NAME;
-import static 
org.apache.flink.connector.pulsar.source.enumerator.cursor.StopCursor.never;
 import static 
org.apache.flink.connector.pulsar.source.enumerator.topic.TopicNameUtils.topicNameWithPartition;
 import static 
org.apache.flink.connector.pulsar.source.enumerator.topic.TopicRange.createFullRange;
+import static 
org.apache.flink.connector.pulsar.source.reader.deserializer.PulsarDeserializationSchema.flinkSchema;
+import static 
org.apache.flink.connector.pulsar.testutils.PulsarTestCommonUtils.isAssignableFromParameterContext;
+import static 
org.apache.flink.connector.pulsar.testutils.extension.PulsarSourceOrderlinessExtension.PULSAR_SOURCE_READER_SUBSCRIPTION_TYPE_STORE_KEY;
+import static 
org.apache.flink.connector.pulsar.testutils.extension.PulsarSourceOrderlinessExtension.PULSAR_TEST_RESOURCE_NAMESPACE;
+import static 
org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.DEFAULT_PARTITIONS;
+import static 
org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.NUM_RECORDS_PER_PARTITION;
 import static org.apache.flink.core.testutils.CommonTestUtils.waitUtil;
+import static 
org.apache.flink.shaded.guava30.com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 import static org.apache.pulsar.client.api.Schema.STRING;
-import static org.junit.jupiter.api.Assertions.assertNull;
+import static org.assertj.core.api.Assertions.assertThat;
 
 /** Test utils for split readers. */
+@ExtendWith({
+PulsarSourceOrderlinessExtension.class,
+TestLoggerExtension.class,
+})
 public abstract class PulsarPartitionSplitReaderTestBase extends 
PulsarTestSuiteBase {

Review comment:
   I changed the helper methods to private and moved them to the subclasses ~
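   For reference, a minimal hypothetical sketch of the shape of that change (not 
the actual diff -- names are illustrative only): a helper used by a single test 
class now lives in that subclass as a private method instead of sitting in the 
shared base class.
   
   ```java
   // Hypothetical sketch of the refactoring -- illustrative names only.

   // Before: the helper sat in the shared base class, visible to every subclass.
   abstract class SplitReaderTestBaseBefore {
       protected String randomSubscriptionName() {
           return "subscription-" + System.nanoTime();
       }
   }

   // After: the base class keeps only truly shared fixtures ...
   abstract class SplitReaderTestBaseAfter {
   }

   // ... and the one subclass that needs the helper owns it as a private method.
   class OrderedSubscriptionSplitReaderTest extends SplitReaderTestBaseAfter {
       private String randomSubscriptionName() {
           return "subscription-" + System.nanoTime();
       }

       void testSubscriptionIsCreated() {
           String name = randomSubscriptionName();
           // ... assert on the reader behaviour for 'name'
       }
   }
   ```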




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] imaffe commented on a change in pull request #17937: [FLINK-25044][testing][Pulsar Connector] Add More Unit Test For Pulsar Source

2022-01-10 Thread GitBox


imaffe commented on a change in pull request #17937:
URL: https://github.com/apache/flink/pull/17937#discussion_r781783619



##
File path: 
flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/reader/split/PulsarPartitionSplitReaderTestBase.java
##
@@ -39,78 +50,164 @@
 import org.junit.jupiter.api.extension.TestTemplateInvocationContextProvider;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.stream.Stream;
 
 import static java.time.Duration.ofSeconds;
 import static java.util.Collections.singletonList;
 import static org.apache.commons.lang3.RandomStringUtils.randomAlphabetic;
+import static 
org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyAdmin;
+import static 
org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyThrow;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_ENABLE_AUTO_ACKNOWLEDGE_MESSAGE;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_MAX_FETCH_RECORDS;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_MAX_FETCH_TIME;
 import static 
org.apache.flink.connector.pulsar.source.PulsarSourceOptions.PULSAR_SUBSCRIPTION_NAME;
-import static 
org.apache.flink.connector.pulsar.source.enumerator.cursor.StopCursor.never;
 import static 
org.apache.flink.connector.pulsar.source.enumerator.topic.TopicNameUtils.topicNameWithPartition;
 import static 
org.apache.flink.connector.pulsar.source.enumerator.topic.TopicRange.createFullRange;
+import static 
org.apache.flink.connector.pulsar.source.reader.deserializer.PulsarDeserializationSchema.flinkSchema;
+import static 
org.apache.flink.connector.pulsar.testutils.PulsarTestCommonUtils.isAssignableFromParameterContext;
+import static 
org.apache.flink.connector.pulsar.testutils.extension.PulsarSourceOrderlinessExtension.PULSAR_SOURCE_READER_SUBSCRIPTION_TYPE_STORE_KEY;
+import static 
org.apache.flink.connector.pulsar.testutils.extension.PulsarSourceOrderlinessExtension.PULSAR_TEST_RESOURCE_NAMESPACE;
+import static 
org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.DEFAULT_PARTITIONS;
+import static 
org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.NUM_RECORDS_PER_PARTITION;
 import static org.apache.flink.core.testutils.CommonTestUtils.waitUtil;
+import static 
org.apache.flink.shaded.guava30.com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 import static org.apache.pulsar.client.api.Schema.STRING;
-import static org.junit.jupiter.api.Assertions.assertNull;
+import static org.assertj.core.api.Assertions.assertThat;
 
 /** Test utils for split readers. */
+@ExtendWith({
+PulsarSourceOrderlinessExtension.class,
+TestLoggerExtension.class,
+})
 public abstract class PulsarPartitionSplitReaderTestBase extends 
PulsarTestSuiteBase {

Review comment:
   I made the helper methods private and moved them to the subclasses. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   * add3e18c345cdb7f5268feaa130990b449d4ecc1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25209) SQLClientSchemaRegistryITCase#testReading is broken

2022-01-10 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472476#comment-17472476
 ] 

Jingsong Lee commented on FLINK-25209:
--

[~renqs] What do you think? Should we cherry-pick these fixes from FLINK-22198 
into this one?

> SQLClientSchemaRegistryITCase#testReading is broken
> ---
>
> Key: FLINK-25209
> URL: https://issues.apache.org/jira/browse/FLINK-25209
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Table SQL / Client, Tests
>Affects Versions: 1.13.3
>Reporter: Chesnay Schepler
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.15.0
>
>
> https://dev.azure.com/chesnay/flink/_build/results?buildId=1880&view=logs&j=0e31ee24-31a6-528c-a4bf-45cde9b2a14e&t=ff03a8fa-e84e-5199-efb2-5433077ce8e2
> {code:java}
> Dec 06 11:33:16 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 236.417 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> Dec 06 11:33:16 [ERROR] 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading  
> Time elapsed: 152.789 s  <<< ERROR!
> Dec 06 11:33:16 java.io.IOException: Could not read expected number of 
> messages.
> Dec 06 11:33:16   at 
> org.apache.flink.tests.util.kafka.KafkaContainerClient.readMessages(KafkaContainerClient.java:115)
> Dec 06 11:33:16   at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading(SQLClientSchemaRegistryITCase.java:165)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] wenlong88 commented on pull request #18215: [FLINK-25392][table-planner]Support new StatementSet syntax in planner and parser

2022-01-10 Thread GitBox


wenlong88 commented on pull request #18215:
URL: https://github.com/apache/flink/pull/18215#issuecomment-1009625701


   Hi @twalthr, thanks for the review. I have updated the PR to address your 
comments and created FLINK-25600 as a follow-up subtask. It is actually almost 
done in my local repo, and I will open that PR once this one is approved.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25209) SQLClientSchemaRegistryITCase#testReading is broken

2022-01-10 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472475#comment-17472475
 ] 

Jingsong Lee commented on FLINK-25209:
--

Hi [~trohrmann], I think this may be related to FLINK-22198. This one also uses 
{{KafkaContainer}}.

> SQLClientSchemaRegistryITCase#testReading is broken
> ---
>
> Key: FLINK-25209
> URL: https://issues.apache.org/jira/browse/FLINK-25209
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Table SQL / Client, Tests
>Affects Versions: 1.13.3
>Reporter: Chesnay Schepler
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.15.0
>
>
> https://dev.azure.com/chesnay/flink/_build/results?buildId=1880&view=logs&j=0e31ee24-31a6-528c-a4bf-45cde9b2a14e&t=ff03a8fa-e84e-5199-efb2-5433077ce8e2
> {code:java}
> Dec 06 11:33:16 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 236.417 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> Dec 06 11:33:16 [ERROR] 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading  
> Time elapsed: 152.789 s  <<< ERROR!
> Dec 06 11:33:16 java.io.IOException: Could not read expected number of 
> messages.
> Dec 06 11:33:16   at 
> org.apache.flink.tests.util.kafka.KafkaContainerClient.readMessages(KafkaContainerClient.java:115)
> Dec 06 11:33:16   at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading(SQLClientSchemaRegistryITCase.java:165)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18320: [FLINK-25427] Decreased number of tests

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18320:
URL: https://github.com/apache/flink/pull/18320#issuecomment-1008933892


   
   ## CI report:
   
   * 621ef761cb61fb61f15b8d6a80cd7add7fd2d648 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29201)
 
   * cff02a6f62345d451348c8a9646850126f88d3b5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29218)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18320: [FLINK-25427] Decreased number of tests

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18320:
URL: https://github.com/apache/flink/pull/18320#issuecomment-1008933892


   
   ## CI report:
   
   * 621ef761cb61fb61f15b8d6a80cd7add7fd2d648 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29201)
 
   * cff02a6f62345d451348c8a9646850126f88d3b5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   * add3e18c345cdb7f5268feaa130990b449d4ecc1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wenlong88 commented on a change in pull request #18215: [FLINK-25392][table-planner]Support new StatementSet syntax in planner and parser

2022-01-10 Thread GitBox


wenlong88 commented on a change in pull request #18215:
URL: https://github.com/apache/flink/pull/18215#discussion_r781777506



##
File path: 
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/nodes/exec/stream/MatchRecognizeJsonPlanTest_jsonplan/testMatch.out
##
@@ -427,7 +463,7 @@
 "cid" : "BIGINT"
   } ]
 },
-"description" : "Match(orderBy=[proctime ASC], measures=[FINAL(A\".id) AS 
aid, FINAL(l.id) AS bid, FINAL(C.id) AS cid], rowsPerMatch=[ONE ROW PER MATCH], 
after=[SKIP TO NEXT ROW], pattern=[((_UTF-16LE'A\"', _UTF-16LE'l'), 
_UTF-16LE'C')], define=[{A\"==(LAST(*.$1, 0), _UTF-16LE'a'), l==(LAST(*.$1, 0), 
_UTF-16LE'b'), C==(LAST(*.$1, 0), _UTF-16LE'c')}])"
+"description" : "Match(orderBy=[proctime ASC], 
measures=[FINAL(FINAL(A\".id)) AS aid, FINAL(FINAL(l.id)) AS bid, 
FINAL(FINAL(C.id)) AS cid], rowsPerMatch=[ONE ROW PER MATCH], after=[SKIP TO 
NEXT ROW], pattern=[((_UTF-16LE'A\"', _UTF-16LE'l'), _UTF-16LE'C')], 
define=[{A\"==(LAST(*.$1, 0), _UTF-16LE'a'), l==(LAST(*.$1, 0), _UTF-16LE'b'), 
C==(LAST(*.$1, 0), _UTF-16LE'c')}])"

Review comment:
   It initially happened because we added validation on the query of the INSERT, 
which was ignored before. 
   I finally found the root cause and fixed it in the latest commit: when 
converting the SQL node to an operation, the query of the INSERT was validated 
twice, and the validator (in SqlValidatorImpl#navigationInMeasure) rewrote the 
FINAL call twice, which produced the FINAL(FINAL(...)) calls in the plan.
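   Purely as an illustration of the effect (the class below is made up and is not 
the planner code): applying the measure rewrite on each validation pass is what 
produces the FINAL(FINAL(...)) calls, while validating the query only once 
applies the rewrite a single time.
   
   ```java
   // Hypothetical sketch: models rewriting MATCH_RECOGNIZE measures twice vs. once.
   import java.util.Arrays;
   import java.util.List;
   import java.util.stream.Collectors;

   class DoubleValidationSketch {

       /** The validator rewrites a measure like "A.id" into a FINAL(...) call. */
       static String rewriteMeasure(String measure) {
           return "FINAL(" + measure + ")";
       }

       /** Stands in for one validation pass over the query of the INSERT. */
       static List<String> validate(List<String> measures) {
           return measures.stream()
                   .map(DoubleValidationSketch::rewriteMeasure)
                   .collect(Collectors.toList());
       }

       public static void main(String[] args) {
           List<String> measures = Arrays.asList("A.id", "B.id", "C.id");

           // Buggy path: the query is validated twice (once more while converting
           // the SqlNode to an Operation), so the rewrite is applied twice.
           System.out.println(validate(validate(measures)));
           // -> [FINAL(FINAL(A.id)), FINAL(FINAL(B.id)), FINAL(FINAL(C.id))]

           // Fixed path: validate the query only once; the rewrite runs one time.
           System.out.println(validate(measures));
           // -> [FINAL(A.id), FINAL(B.id), FINAL(C.id)]
       }
   }
   ```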




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   * add3e18c345cdb7f5268feaa130990b449d4ecc1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   * add3e18c345cdb7f5268feaa130990b449d4ecc1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25589) Update Chinese version of Elasticsearch connector docs

2022-01-10 Thread Yuhao Bi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhao Bi updated FLINK-25589:
-
Attachment: en_datastream_elasticsearch.png
en_table_elasticsearch.png
zh_datastream_elasticsearch.png
zh_table_elasticsearch.png

> Update Chinese version of Elasticsearch connector docs
> --
>
> Key: FLINK-25589
> URL: https://issues.apache.org/jira/browse/FLINK-25589
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Connectors / ElasticSearch, 
> Documentation
>Reporter: Alexander Preuss
>Assignee: Yuhao Bi
>Priority: Major
>  Labels: pull-request-available
> Attachments: en_datastream_elasticsearch.png, 
> en_table_elasticsearch.png, zh_datastream_elasticsearch.png, 
> zh_table_elasticsearch.png
>
>
> In FLINK-24326 we updated the documentation with the new Elasticsearch sink 
> interface. The Chinese version still has to be updated as well.
> The affected pages are: 
> docs/content/docs/connectors/datastream/elasticsearch.md
> docs/content/docs/connectors/table/elasticsearch.md
> English doc PR for reference:
> https://github.com/apache/flink/pull/17930



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   * add3e18c345cdb7f5268feaa130990b449d4ecc1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zjureel commented on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


zjureel commented on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1009612368


   Thanks @KarmaGYZ, I have made the changes according to your advice and updated 
this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29217)
 
   * add3e18c345cdb7f5268feaa130990b449d4ecc1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * be139b8d4fbc8be2c6129e15d7cb77bcf58f66f9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29190)
 
   * fe76d7d6e608811708815b939b83b5daff9049d1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18322: [FLINK-25589][docs][connectors] Update Chinese version of Elasticsearch connector docs

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18322:
URL: https://github.com/apache/flink/pull/18322#issuecomment-1009605866


   
   ## CI report:
   
   * 310b62a595a1af2d091b6a0f2e8fc137460b92e7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29216)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25374) Azure pipeline get stalled on scanning project

2022-01-10 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472459#comment-17472459
 ] 

Martijn Visser commented on FLINK-25374:


[~trohrmann] The Alibaba CIs have stabilized a lot; this mention from 
[~gaoyunhaii] is actually the first one with a hiccup. What makes it really 
hard to debug is that we haven't had a stable master branch in a really long 
time, so we don't know whether the current issues are caused by a networking/VM 
problem or by unstable code. 

> Azure pipeline get stalled on scanning project
> --
>
> Key: FLINK-25374
> URL: https://issues.apache.org/jira/browse/FLINK-25374
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.15.0, 1.13.5, 1.14.2
>Reporter: Yun Gao
>Assignee: Martijn Visser
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> {code:java}
> 2021-12-18T02:01:01.8980373Z Dec 18 02:01:01 RUNNING 'run_mvn 
> -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2 -Dfast -Pskip-webui-build 
> -Dlog.dir=/__w/_temp/debug_files 
> -Dlog4j.configurationFile=file:///__w/2/s/tools/ci/log4j.properties 
> -DskipTests -Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11  
> install'.
> 2021-12-18T02:01:01.885Z Dec 18 02:01:01 Invoking mvn with 'mvn 
> -Dmaven.wagon.http.pool=false -Dorg.slf4j.simpleLogger.showDateTime=true 
> -Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS 
> -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn
>  --no-snapshot-updates -B -Dhadoop.version=2.8.3 -Dinclude_hadoop_aws 
> -Dscala-2.11  --settings /__w/2/s/tools/ci/alibaba-mirror-settings.xml  
> -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2 -Dfast -Pskip-webui-build 
> -Dlog.dir=/__w/_temp/debug_files 
> -Dlog4j.configurationFile=file:///__w/2/s/tools/ci/log4j.properties 
> -DskipTests -Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11 install'
> 2021-12-18T02:01:01.9407169Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2021-12-18T02:01:02.8291019Z Dec 18 02:01:02 [INFO] Scanning for projects...
> 2021-12-18T02:16:02.4676481Z Dec 18 02:16:02 
> ==
> 2021-12-18T02:16:02.4679732Z Dec 18 02:16:02 Process produced no output for 
> 900 seconds.
> 2021-12-18T02:16:02.4680416Z Dec 18 02:16:02 
> ==
> 2021-12-18T02:16:02.4681062Z Dec 18 02:16:02 
> ==
> 2021-12-18T02:16:02.4681601Z Dec 18 02:16:02 The following Java processes are 
> running (JPS)
> 2021-12-18T02:16:02.4682191Z Dec 18 02:16:02 
> ==
> 2021-12-18T02:16:02.4743659Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2021-12-18T02:16:03.1019936Z Dec 18 02:16:03 354 Launcher
> 2021-12-18T02:16:03.1020514Z Dec 18 02:16:03 857 Jps
> 2021-12-18T02:16:03.1052014Z Dec 18 02:16:03 
> ==
> 2021-12-18T02:16:03.1052803Z Dec 18 02:16:03 Printing stack trace of Java 
> process 354
> 2021-12-18T02:16:03.1053382Z Dec 18 02:16:03 
> ==
> 2021-12-18T02:16:03.1123385Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2021-12-18T02:16:03.4416679Z Dec 18 02:16:03 2021-12-18 02:16:03
> 2021-12-18T02:16:03.4417639Z Dec 18 02:16:03 Full thread dump OpenJDK 64-Bit 
> Server VM (25.292-b10 mixed mode):
> 2021-12-18T02:16:03.4418277Z Dec 18 02:16:03 
> 2021-12-18T02:16:03.4419452Z Dec 18 02:16:03 "Attach Listener" #22 daemon 
> prio=9 os_prio=0 tid=0x7fbc9c001000 nid=0x3b8 waiting on condition 
> [0x]
> 2021-12-18T02:16:03.4420652Z Dec 18 02:16:03java.lang.Thread.State: 
> RUNNABLE
> 2021-12-18T02:16:03.4421479Z Dec 18 02:16:03 
> 2021-12-18T02:16:03.4422239Z Dec 18 02:16:03 "Service Thread" #20 daemon 
> prio=9 os_prio=0 tid=0x7fbd4810c800 nid=0x196 runnable 
> [0x]
> 2021-12-18T02:16:03.4422936Z Dec 18 02:16:03java.lang.Thread.State: 
> RUNNABLE
> 2021-12-18T02:16:03.4423280Z Dec 18 02:16:03 
> 2021-12-18T02:16:03.4423900Z Dec 18 02:16:03 "C1 CompilerThread14" #19 daemon 
> prio=9 os_prio=0 tid=0x7fbd48109800 nid=0x195 waiting on condition 
> [0x]
> 2021-12-18T02:16:03.4424648Z Dec 18 02:16:03java.lang.Thread.State: 
> RUNNABLE
> 2021-12-18T02:16:03.4425089Z Dec 18 02:16:03 
> 2021-12-18T02:16:03.4425734Z Dec 18 02:16:03 "C1 CompilerThread13" #18 daemon 
> prio=9 os_prio=0 tid=0x7fbd4810780

[GitHub] [flink] flinkbot commented on pull request #18322: [FLINK-25589][docs][connectors] Update Chinese version of Elasticsearch connector docs

2022-01-10 Thread GitBox


flinkbot commented on pull request #18322:
URL: https://github.com/apache/flink/pull/18322#issuecomment-1009605866


   
   ## CI report:
   
   * 310b62a595a1af2d091b6a0f2e8fc137460b92e7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18322: [FLINK-25589][docs][connectors] Update Chinese version of Elasticsearch connector docs

2022-01-10 Thread GitBox


flinkbot commented on pull request #18322:
URL: https://github.com/apache/flink/pull/18322#issuecomment-1009604792


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 310b62a595a1af2d091b6a0f2e8fc137460b92e7 (Tue Jan 11 
05:10:17 UTC 2022)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25589) Update Chinese version of Elasticsearch connector docs

2022-01-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25589:
---
Labels: pull-request-available  (was: )

> Update Chinese version of Elasticsearch connector docs
> --
>
> Key: FLINK-25589
> URL: https://issues.apache.org/jira/browse/FLINK-25589
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Connectors / ElasticSearch, 
> Documentation
>Reporter: Alexander Preuss
>Assignee: Yuhao Bi
>Priority: Major
>  Labels: pull-request-available
>
> In FLINK-24326 we updated the documentation with the new Elasticsearch sink 
> interface. The Chinese version still has to be updated as well.
> The affected pages are: 
> docs/content/docs/connectors/datastream/elasticsearch.md
> docs/content/docs/connectors/table/elasticsearch.md
> English doc PR for reference:
> https://github.com/apache/flink/pull/17930



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] biyuhao opened a new pull request #18322: [FLINK-25589][docs][connectors] Update Chinese version of Elasticsearch connector docs

2022-01-10 Thread GitBox


biyuhao opened a new pull request #18322:
URL: https://github.com/apache/flink/pull/18322


   
   
   ## What is the purpose of the change
   
   Update the Chinese version of the Elasticsearch sink documentation, related to 
[FLINK-24326](https://issues.apache.org/jira/browse/FLINK-24326) (#17930)
   
   
   ## Brief change log
   
 - Translate `docs/content.zh/docs/connectors/datastream/elasticsearch.md`
 - Translate `docs/content.zh/docs/connectors/table/elasticsearch.md`
 - Trivial fix of `docs/content/docs/connectors/datastream/elasticsearch.md`
   
   
   ## Verifying this change
   
   This is a document update.
   Please build docs locally to verify document content and syntax.
   ```
   cd docs
   ./build_docs.sh -p
   ```
   Then you can preview `http://localhost:1313` in the local browser.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18037: [FLINK-25179] Add document for composite types support for parquet

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18037:
URL: https://github.com/apache/flink/pull/18037#issuecomment-987689691


   
   ## CI report:
   
   * 057e4a5a2b14921bea57d27b86fd9bc83b1df5ce Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29209)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-14998) Remove FileUtils#deletePathIfEmpty

2022-01-10 Thread Thomas Weise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise updated FLINK-14998:
-
Fix Version/s: (was: 1.14.3)

> Remove FileUtils#deletePathIfEmpty
> --
>
> Key: FLINK-14998
> URL: https://issues.apache.org/jira/browse/FLINK-14998
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Reporter: Yun Tang
>Priority: Minor
>  Labels: auto-deprioritized-major, starter
> Fix For: 1.15.0
>
>
> With the lesson learned from FLINK-7266 and the refactoring in FLINK-8540, the 
> {{FileUtils#deletePathIfEmpty}} method is no longer used anywhere in Flink 
> production code. From my point of view, it is not wise to keep a method with a 
> known high-risk defect in Flink's official code, so I suggest removing this 
> part of the code.
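
For readers unfamiliar with why such a helper is risky, below is a minimal sketch of the check-then-delete pattern it implies. It uses plain java.nio rather than Flink's own FileSystem API, and the class and method names are invented for illustration only; the point is that the emptiness check and the delete are separate steps, so a concurrent writer can slip a file in between, and the directory listing itself can be costly on object stores such as S3.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical helper, invented for illustration; not the Flink implementation.
public final class DeleteIfEmptySketch {

    static boolean deleteDirectoryIfEmpty(Path dir) throws IOException {
        // Step 1: list the directory to decide whether it is empty.
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
            if (entries.iterator().hasNext()) {
                return false; // not empty, leave it alone
            }
        }
        // Step 2: delete it. A file created by another writer between the check
        // above and this call makes the delete fail or race with fresh data.
        Files.delete(dir);
        return true;
    }
}
```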



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13910) Many serializable classes have no explicit 'serialVersionUID'

2022-01-10 Thread Thomas Weise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise updated FLINK-13910:
-
Fix Version/s: (was: 1.14.3)

> Many serializable classes have no explicit 'serialVersionUID'
> -
>
> Key: FLINK-13910
> URL: https://issues.apache.org/jira/browse/FLINK-13910
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Reporter: Yun Tang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.15.0
>
> Attachments: SerializableNoSerialVersionUIDField, 
> classes-without-uid-per-module, serializable-classes-without-uid-5249249
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, many serializable classes in Flink have no explicit 
> 'serialVersionUID'. As the [official 
> doc|https://flink.apache.org/contributing/code-style-and-quality-java.html#java-serialization]
>  says, {{Serializable classes must define a Serial Version UID}}. 
> A missing 'serialVersionUID' can cause compatibility problems. Take 
> {{TwoPhaseCommitSinkFunction}} as an example: since no explicit 
> 'serialVersionUID' is defined, its default 'serialVersionUID' changed from 
> "4584405056408828651" to "4064406918549730832" after 
> [FLINK-10455|https://github.com/apache/flink/commit/489be82a6d93057ed4a3f9bf38ef50d01d11d96b]
>  was introduced. In other words, if we submit a job from a local Flink-1.6.3 
> installation to a remote Flink-1.6.2 cluster that uses 
> {{TwoPhaseCommitSinkFunction}}, we get an exception like:
> {code:java}
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot 
> instantiate user function.
> at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperator(StreamConfig.java:239)
> at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:104)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:267)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.InvalidClassException: 
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction; 
> local class incompatible: stream classdesc serialVersionUID = 
> 4584405056408828651, local class serialVersionUID = 4064406918549730832
> at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:699)
> at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
> at 
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
> at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
> at 
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:537)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:524)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:512)
> at 
> org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:473)
> at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperator(StreamConfig.java:224)
> ... 4 more
> {code}
> A similar problem exists in 
> {{org.apache.flink.streaming.api.operators.SimpleOperatorFactory}}, whose 
> 'serialVersionUID' differs between release-1.9 and the current master branch.
> IMO, we have two options to fix this bug:
> # Add an explicit serialVersionUID to the affected classes, identical to the 
> value in the latest Flink-1.9.0 release code (see the sketch below).
> # Use a mechanism similar to {{FailureTolerantObjectInputStream}} in 
> {{InstantiationUtil}} to ignore serialVersionUID mismatches.
> I have collected all production classes without a serialVersionUID from the 
> latest master branch in the attachment, which amounts to 639 classes.
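
As a reference for option 1, here is a minimal, hypothetical sketch of pinning an explicit serialVersionUID on a Serializable class. The class and field below are stand-ins rather than real Flink code; for an actual class the constant would be set to the value computed from the existing release (for instance with the JDK's `serialver` tool), so that already-serialized instances keep deserializing.

```java
import java.io.Serializable;

// Hypothetical stand-in class, not real Flink code.
public class ExampleSinkFunction implements Serializable {

    // Pinning the UID keeps the serialized form stable: adding methods or
    // changing dependencies no longer shifts the JVM-computed default value,
    // which is what causes the InvalidClassException shown above.
    private static final long serialVersionUID = 1L; // use the value from the released class

    private String transactionalIdPrefix;
}
```

Option 2, by contrast, would leave the classes untouched and relax the check at deserialization time instead.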



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13400) Remove Hive and Hadoop dependencies from SQL Client

2022-01-10 Thread Thomas Weise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise updated FLINK-13400:
-
Fix Version/s: (was: 1.14.3)

> Remove Hive and Hadoop dependencies from SQL Client
> ---
>
> Key: FLINK-13400
> URL: https://issues.apache.org/jira/browse/FLINK-13400
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Timo Walther
>Assignee: frank wang
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> pull-request-available, stale-assigned
> Fix For: 1.15.0
>
>
> 340 of the 550 lines in the SQL Client {{pom.xml}} deal with Hive and Hadoop 
> dependencies. Hive has nothing to do with the SQL Client, and the long list of 
> exclusions there will be hard to maintain. Some dependencies are even in 
> {{provided}} scope rather than {{test}} scope.
> We should remove all Hive/Hadoop dependencies and replace the catalog-related 
> tests with a testing catalog, similar to how we test sources and sinks.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18215: [FLINK-25392][table-planner]Support new StatementSet syntax in planner and parser

2022-01-10 Thread GitBox


flinkbot edited a comment on pull request #18215:
URL: https://github.com/apache/flink/pull/18215#issuecomment-1001883080


   
   ## CI report:
   
   * 5a46cd66a5392fa16690cd23a8978c5e3de3b380 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28995)
 
   * d5dfa7d9c3bc5607e96b9ccfbeb6ce1a00bc13ca Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29215)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



