[GitHub] [flink] AHeise commented on a change in pull request #14348: [FLINK-20433][tests] Stabilizing UnalignedCheckpointITCase.

2020-12-14 Thread GitBox


AHeise commented on a change in pull request #14348:
URL: https://github.com/apache/flink/pull/14348#discussion_r543123013



##
File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/UnalignedCheckpointTestBase.java
##
@@ -103,11 +101,6 @@
@Rule
public final TemporaryFolder temp = new TemporaryFolder();
 
-   @Rule
-   public final Timeout timeout = Timeout.builder()
-   .withTimeout(300, TimeUnit.SECONDS)
-   .build();

Review comment:
   Yes, that was the reason why I added it initially. But we then have a 
timeout rule which prints the stack traces (just what JUnit5 does).
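
   For reference, a minimal sketch (not the Flink code; the test class and method names are illustrative) of a JUnit 4 timeout rule configured to surface the suspected stuck thread's stack trace in the failure, roughly what JUnit 5 reports out of the box:

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

import java.util.concurrent.TimeUnit;

public class TimeoutWithStackTraceSketch {

    // withLookingForStuckThread makes the rule attach the suspected stuck
    // thread's stack trace to the TestTimedOutException it throws.
    @Rule
    public final Timeout timeout = Timeout.builder()
            .withTimeout(300, TimeUnit.SECONDS)
            .withLookingForStuckThread(true)
            .build();

    @Test
    public void runsUnderTimeout() {
        // test body elided
    }
}
```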





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14367: [FLINK-20601][docs] Rework PyFlink CLI documentation.

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14367:
URL: https://github.com/apache/flink/pull/14367#issuecomment-743040989


   
   ## CI report:
   
   * 8825b36cf590e53bf15837534ad4797594031ed5 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10838)
 
   * d1f984ee907996303c30a9fba85ee177cc688855 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20601) Rework PyFlink CLI documentation

2020-12-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20601:
---
Labels: pull-request-available  (was: )

> Rework PyFlink CLI documentation
> 
>
> Key: FLINK-20601
> URL: https://issues.apache.org/jira/browse/FLINK-20601
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Documentation
>Reporter: Matthias
>Assignee: Shuiqiang Chen
>Priority: Major
>  Labels: pull-request-available
>
> The CLI PyFlink section needs to be refactored as well. This issue covers 
> this work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] shuiqiangchen commented on pull request #14367: [FLINK-20601][docs] Rework PyFlink CLI documentation.

2020-12-14 Thread GitBox


shuiqiangchen commented on pull request #14367:
URL: https://github.com/apache/flink/pull/14367#issuecomment-745104687


   Hi @XComp, sorry for the late reply. Thank you for creating the JIRA; I have 
renamed the commit message and PR title accordingly. Please have a look.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14380:
URL: https://github.com/apache/flink/pull/14380#issuecomment-73572


   
   ## CI report:
   
   * 4741d372abe2f11a1639c70a6937a2421a41c41e Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10874)
 
   * edf39a787c1852c1b241dc2748c8e86069b8e721 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10875)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20540) The baseurl for pg database is incorrect in JdbcCatalog page

2020-12-14 Thread zhangzhao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249514#comment-17249514
 ] 

zhangzhao commented on FLINK-20540:
---

[~Leonard Xu], can you merge the PR? Here is the 
[PR address|https://github.com/apache/flink/pull/14362].

> The baseurl for pg database is incorrect in JdbcCatalog page
> 
>
> Key: FLINK-20540
> URL: https://issues.apache.org/jira/browse/FLINK-20540
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Documentation, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.1
>Reporter: zhangzhao
>Assignee: zhangzhao
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>  
> {code:java}
> // code placeholder
> import org.apache.flink.connector.jdbc.catalog.JdbcCatalog
> new JdbcCatalog(name, defaultDatabase, username, password, baseUrl){code}
>  
> The baseUrl must end with "/" when instantiating JdbcCatalog.
> But according to the [Flink 
> documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/dev/table/connectors/jdbc.html#usage-of-postgrescatalog]
>  and the code comments, baseUrl should support the format 
> {{"jdbc:postgresql://<ip>:<port>"}}
>  
> When I use baseUrl {{"jdbc:postgresql://<ip>:<port>"}}, the error stack is:
> {code:java}
> // code placeholder
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)\\nCaused by: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not execute application.
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)\\n\\t...
>  7 more\\nCaused by: org.apache.flink.util.FlinkRuntimeException: Could not 
> execute application.
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)\\n\\t...
>  7 more\\nCaused by: 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Failed connecting to 
> jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)\\n\\t...
>  10 more\\nCaused by: org.apache.flink.table.api.ValidationException: Failed 
> connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog.open(AbstractJdbcCatalog.java:100)
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:191)
> org.apache.flink.table.api.internal.TableEnvImpl.registerCatalog(TableEnvImpl.scala:267)
> com.upai.jobs.TableBodySentFields.registerCatalog(TableBodySentFields.scala:25)
> com.upai.jobs.FusionGifShow$.run(FusionGifShow.scala:28)
> com.upai.jobs.FlinkTask$.delayedEndpoint$com$upai$jobs$FlinkTask$1(FlinkTask.scala:41)
> com.upai.jobs.FlinkTask$delayedInit$body.apply(FlinkTask.scala:11)
> scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
> 
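
For context on the trailing-slash requirement described above, a hedged sketch (catalog name, credentials, and database values are illustrative; the host comes from the error message in the report):

{code:java}
import org.apache.flink.connector.jdbc.catalog.JdbcCatalog;

public class JdbcCatalogBaseUrlSketch {
    public static void main(String[] args) {
        // Works according to the report: the baseUrl ends with '/'.
        JdbcCatalog working = new JdbcCatalog(
                "mypg", "flink", "postgres", "secret",
                "jdbc:postgresql://flink-postgres.cdn-flink:5432/");

        // Fails according to the report: without the trailing slash the catalog
        // concatenates the default database directly, yielding ".../5432flink",
        // which matches the "Failed connecting ... via JDBC" error above.
        JdbcCatalog failing = new JdbcCatalog(
                "mypg", "flink", "postgres", "secret",
                "jdbc:postgresql://flink-postgres.cdn-flink:5432");
    }
}
{code}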

[jira] [Closed] (FLINK-20354) Rework standalone deployment documentation page

2020-12-14 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-20354.
--
Fix Version/s: 1.12.1
   Resolution: Fixed

Merged on master in 
https://github.com/apache/flink/commit/4ce5d03c76f25e845c1c538295d7c3e52e8f7153
Merged on release-1.12 in 
https://github.com/apache/flink/commit/6351fbb4bb731238a56d8823549bd48bf061fac3

> Rework standalone deployment documentation page
> ---
>
> Key: FLINK-20354
> URL: https://issues.apache.org/jira/browse/FLINK-20354
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Till Rohrmann
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.12.1
>
>
> Similar to FLINK-20347 we need to update the standalone deployment 
> documentation page. Additionally, we need to verify that everything we state 
> on the documentation works.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rmetzger closed pull request #14346: [FLINK-20354] Rework standalone docs pages

2020-12-14 Thread GitBox


rmetzger closed pull request #14346:
URL: https://github.com/apache/flink/pull/14346


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on pull request #14346: [FLINK-20354] Rework standalone docs pages

2020-12-14 Thread GitBox


rmetzger commented on pull request #14346:
URL: https://github.com/apache/flink/pull/14346#issuecomment-745080117


   Thanks a lot for the extensive review. Merging this now.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] V1ncentzzZ commented on pull request #14370: [FLINK-20582][docs] Fix typos in `CREATE Statements` docs.

2020-12-14 Thread GitBox


V1ncentzzZ commented on pull request #14370:
URL: https://github.com/apache/flink/pull/14370#issuecomment-745075273


   cc @wuchong 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-20607) a wrong example in udfs page.

2020-12-14 Thread shizhengchao (Jira)
shizhengchao created FLINK-20607:


 Summary: a wrong example in udfs page.
 Key: FLINK-20607
 URL: https://issues.apache.org/jira/browse/FLINK-20607
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.11.1, 1.12.0
Reporter: shizhengchao


The demonstration of multiple input types in FunctionHint is wrong: 
{code:java}
@FunctionHint(
  input = [@DataTypeHint("INT"), @DataTypeHint("INT")],
  output = @DataTypeHint("INT")
)
{code}

should be 
{code:java}
@FunctionHint(
  input = {@DataTypeHint("INT"), @DataTypeHint("INT")},
  output = @DataTypeHint("INT")
)
{code}
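
To make the corrected syntax concrete, a small self-contained sketch (the function itself is illustrative, not taken from the documentation page):

{code:java}
import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.FunctionHint;
import org.apache.flink.table.functions.ScalarFunction;

// Java annotation arrays use braces, so multiple input hints are written with {...}.
@FunctionHint(
  input = {@DataTypeHint("INT"), @DataTypeHint("INT")},
  output = @DataTypeHint("INT")
)
public class IntSumFunction extends ScalarFunction {
    public Integer eval(Integer a, Integer b) {
        return a + b;
    }
}
{code}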



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14380:
URL: https://github.com/apache/flink/pull/14380#issuecomment-73572


   
   ## CI report:
   
   * 2d5ac91ee373efed1ede7da63abbfdbbfb60aabf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10862)
 
   * 4741d372abe2f11a1639c70a6937a2421a41c41e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10874)
 
   * edf39a787c1852c1b241dc2748c8e86069b8e721 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10875)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14380:
URL: https://github.com/apache/flink/pull/14380#issuecomment-73572


   
   ## CI report:
   
   * 2d5ac91ee373efed1ede7da63abbfdbbfb60aabf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10862)
 
   * 4741d372abe2f11a1639c70a6937a2421a41c41e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10874)
 
   * edf39a787c1852c1b241dc2748c8e86069b8e721 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14380:
URL: https://github.com/apache/flink/pull/14380#issuecomment-73572


   
   ## CI report:
   
   * 2d5ac91ee373efed1ede7da63abbfdbbfb60aabf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10862)
 
   * 4741d372abe2f11a1639c70a6937a2421a41c41e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10874)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on a change in pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


godfreyhe commented on a change in pull request #14380:
URL: https://github.com/apache/flink/pull/14380#discussion_r543012556



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecMultipleInput.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.batch;
+
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.streaming.api.operators.ChainingStrategy;
+import 
org.apache.flink.streaming.api.transformations.MultipleInputTransformation;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.planner.delegation.BatchPlanner;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecEdge;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
+import org.apache.flink.table.planner.plan.nodes.exec.utils.ExecNodeUtil;
+import 
org.apache.flink.table.runtime.operators.multipleinput.BatchMultipleInputStreamOperatorFactory;
+import 
org.apache.flink.table.runtime.operators.multipleinput.TableOperatorWrapperGenerator;
+import org.apache.flink.table.runtime.operators.multipleinput.input.InputSpec;
+import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
+
+import org.apache.commons.lang3.tuple.Pair;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Batch exec node for multiple input which contains a sub-graph of {@link 
ExecNode}s.
+ * The root node of the sub-graph is {@link #outputNode}, and the leaf nodes 
of the sub-graph are
+ * the output nodes of the {@link #getInputNodes()}.
+ *
+ * The following example shows a graph of {@code ExecNode}s with multiple 
input node:
+ * {@code
+ *          Sink
+ *           |
+ * +---------+--------+
+ * |         |        |
+ * |       Join       |
+ * |      /    \      | BatchExecMultipleInput
+ * |   Agg1    Agg2   |
+ * |    |       |     |
+ * +----+-------+-----+
+ *      |       |
+ * Exchange1 Exchange2
+ *      |       |
+ *    Scan1   Scan2
+ * }
+ *
+ * The multiple input node contains three nodes: `Join`, `Agg1` and `Agg2`.
+ * `Join` is the root node ({@link #outputNode}) of the sub-graph,
+ * `Agg1` and `Agg2` are the leaf nodes of the sub-graph,
+ * `Exchange1` and `Exchange2` are the input nodes.
+ */
+public class BatchExecMultipleInput extends BatchExecNode<RowData> {
+
+   private final ExecNode<?> outputNode;
+
+   public BatchExecMultipleInput(
+   List<ExecNode<?>> inputNodes,
+   List<ExecEdge> inputEdges,
+   ExecNode<?> outputNode,

Review comment:
   Yes, multiple input could support a DAG, but currently it only supports a 
tree.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] TsReaper commented on a change in pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


TsReaper commented on a change in pull request #14380:
URL: https://github.com/apache/flink/pull/14380#discussion_r543010165



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecMultipleInput.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.batch;
+
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.streaming.api.operators.ChainingStrategy;
+import 
org.apache.flink.streaming.api.transformations.MultipleInputTransformation;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.planner.delegation.BatchPlanner;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecEdge;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
+import org.apache.flink.table.planner.plan.nodes.exec.utils.ExecNodeUtil;
+import 
org.apache.flink.table.runtime.operators.multipleinput.BatchMultipleInputStreamOperatorFactory;
+import 
org.apache.flink.table.runtime.operators.multipleinput.TableOperatorWrapperGenerator;
+import org.apache.flink.table.runtime.operators.multipleinput.input.InputSpec;
+import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
+
+import org.apache.commons.lang3.tuple.Pair;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Batch exec node for multiple input which contains a sub-graph of {@link 
ExecNode}s.
+ * The root node of the sub-graph is {@link #outputNode}, and the leaf nodes 
of the sub-graph are
+ * the output nodes of the {@link #getInputNodes()}.
+ *
+ * The following example shows a graph of {@code ExecNode}s with multiple 
input node:
+ * {@code
+ *          Sink
+ *           |
+ * +---------+--------+
+ * |         |        |
+ * |       Join       |
+ * |      /    \      | BatchExecMultipleInput
+ * |   Agg1    Agg2   |
+ * |    |       |     |
+ * +----+-------+-----+
+ *      |       |
+ * Exchange1 Exchange2
+ *      |       |
+ *    Scan1   Scan2
+ * }
+ *
+ * The multiple input node contains three nodes: `Join`, `Agg1` and `Agg2`.
+ * `Join` is the root node ({@link #outputNode}) of the sub-graph,
+ * `Agg1` and `Agg2` are the leaf nodes of the sub-graph,
+ * `Exchange1` and `Exchange2` are the input nodes.
+ */
+public class BatchExecMultipleInput extends BatchExecNode<RowData> {
+
+   private final ExecNode<?> outputNode;
+
+   public BatchExecMultipleInput(
+   List<ExecNode<?>> inputNodes,
+   List<ExecEdge> inputEdges,
+   ExecNode<?> outputNode,

Review comment:
   Nope, in multiple input operators the sub-operators can form a DAG, not 
necessarily a tree.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lessonone1 commented on pull request #13026: fixFlinkSqlInsertQuerySinkFieldNotMatch

2020-12-14 Thread GitBox


lessonone1 commented on pull request #13026:
URL: https://github.com/apache/flink/pull/13026#issuecomment-745020893


   bad way 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lessonone1 closed pull request #13026: fixFlinkSqlInsertQuerySinkFieldNotMatch

2020-12-14 Thread GitBox


lessonone1 closed pull request #13026:
URL: https://github.com/apache/flink/pull/13026


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20495) Elasticsearch6DynamicSinkITCase Hang

2020-12-14 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249460#comment-17249460
 ] 

Huang Xingbo commented on FLINK-20495:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10872=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361]

 

> Elasticsearch6DynamicSinkITCase Hang
> 
>
> Key: FLINK-20495
> URL: https://issues.apache.org/jira/browse/FLINK-20495
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.13.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10535=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20]
>  
> {code:java}
> 2020-12-04T22:39:33.9748225Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase
> 2020-12-04T22:54:51.9486410Z 
> ==
> 2020-12-04T22:54:51.9488766Z Process produced no output for 900 seconds.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14380:
URL: https://github.com/apache/flink/pull/14380#issuecomment-73572


   
   ## CI report:
   
   * 2d5ac91ee373efed1ede7da63abbfdbbfb60aabf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10862)
 
   * 4741d372abe2f11a1639c70a6937a2421a41c41e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20389) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-14 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249459#comment-17249459
 ] 

Huang Xingbo commented on FLINK-20389:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10872=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20389
> URL: https://issues.apache.org/jira/browse/FLINK-20389
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Matthias
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
> Attachments: FLINK-20389-failure.log
>
>
> [Build|https://dev.azure.com/mapohl/flink/_build/results?buildId=118=results]
>  failed due to {{UnalignedCheckpointITCase}} caused by a 
> {{NullPointerException}}:
> {code:java}
> Test execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase) failed 
> with:
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> 

[GitHub] [flink] godfreyhe commented on a change in pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


godfreyhe commented on a change in pull request #14380:
URL: https://github.com/apache/flink/pull/14380#discussion_r542990713



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecMultipleInput.java
##
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.planner.delegation.StreamPlanner;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
+
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Stream exec node for multiple input which contains a sub-graph of {@link 
ExecNode}s.
+ * The root node of the sub-graph is {@link #outputNode}, and the leaf nodes 
of the sub-graph are
+ * the output nodes of the {@link #getInputNodes()}.
+ *
+ * The following example shows a graph of {@code ExecNode}s with multiple 
input node:
+ * {@code
+ *          Sink
+ *           |
+ * +---------+--------+
+ * |         |        |
+ * |       Join       |
+ * |      /    \      | BatchExecMultipleInput
+ * |   Agg1    Agg2   |
+ * |    |       |     |
+ * +----+-------+-----+
+ *      |       |
+ * Exchange1 Exchange2
+ *      |       |
+ *    Scan1   Scan2
+ * }
+ *
+ * The multiple input node contains three nodes: `Join`, `Agg1` and `Agg2`.
+ * `Join` is the root node ({@link #outputNode}) of the sub-graph,
+ * `Agg1` and `Agg2` are the leaf nodes of the sub-graph,
+ * `Exchange1` and `Exchange2` are the input nodes.
+ */
+public class StreamExecMultipleInput extends StreamExecNode<RowData> {
+
+   private final ExecNode<?> outputNode;
+
+   public StreamExecMultipleInput(
+   List<ExecNode<?>> inputNodes,
+   ExecNode<?> outputNode,

Review comment:
   `outputNode` is used for translating the sub-graph into transformations





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on a change in pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


godfreyhe commented on a change in pull request #14380:
URL: https://github.com/apache/flink/pull/14380#discussion_r542993092



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecMultipleInput.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.batch;
+
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.streaming.api.operators.ChainingStrategy;
+import 
org.apache.flink.streaming.api.transformations.MultipleInputTransformation;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.planner.delegation.BatchPlanner;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecEdge;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
+import org.apache.flink.table.planner.plan.nodes.exec.utils.ExecNodeUtil;
+import 
org.apache.flink.table.runtime.operators.multipleinput.BatchMultipleInputStreamOperatorFactory;
+import 
org.apache.flink.table.runtime.operators.multipleinput.TableOperatorWrapperGenerator;
+import org.apache.flink.table.runtime.operators.multipleinput.input.InputSpec;
+import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
+
+import org.apache.commons.lang3.tuple.Pair;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Batch exec node for multiple input which contains a sub-graph of {@link 
ExecNode}s.
+ * The root node of the sub-graph is {@link #outputNode}, and the leaf nodes 
of the sub-graph are
+ * the output nodes of the {@link #getInputNodes()}.
+ *
+ * The following example shows a graph of {@code ExecNode}s with multiple 
input node:
+ * {@code
+ *          Sink
+ *           |
+ * +---------+--------+
+ * |         |        |
+ * |       Join       |
+ * |      /    \      | BatchExecMultipleInput
+ * |   Agg1    Agg2   |
+ * |    |       |     |
+ * +----+-------+-----+
+ *      |       |
+ * Exchange1 Exchange2
+ *      |       |
+ *    Scan1   Scan2
+ * }
+ *
+ * The multiple input node contains three nodes: `Join`, `Agg1` and `Agg2`.
+ * `Join` is the root node ({@link #outputNode}) of the sub-graph,
+ * `Agg1` and `Agg2` are the leaf nodes of the sub-graph,
+ * `Exchange1` and `Exchange2` are the input nodes.
+ */
+public class BatchExecMultipleInput extends BatchExecNode<RowData> {
+
+   private final ExecNode<?> outputNode;
+
+   public BatchExecMultipleInput(
+   List<ExecNode<?>> inputNodes,
+   List<ExecEdge> inputEdges,
+   ExecNode<?> outputNode,

Review comment:
   In operator chaining, `tailNode` is appropriate because the operators form a 
list. In multiple input, however, the operators form a graph (more strictly, it is a 
tree). So what about `rootNode`?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xiaoHoly commented on pull request #14377: [FLINK-19905][Connector][jdbc] The Jdbc-connector's 'lookup.max-retries' option initial value is 1 in JdbcLookupFunction

2020-12-14 Thread GitBox


xiaoHoly commented on pull request #14377:
URL: https://github.com/apache/flink/pull/14377#issuecomment-745005984


   Hi @dianfu, do you have time to review my code?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20576) Flink Temporal Join Hive Dim Error

2020-12-14 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249440#comment-17249440
 ] 

HideOnBush commented on FLINK-20576:


The ORC format cannot be read, but the parquet and text formats can be read 
normally

> Flink Temporal Join Hive Dim Error
> --
>
> Key: FLINK-20576
> URL: https://issues.apache.org/jira/browse/FLINK-20576
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Fix For: 1.13.0
>
>
>  
> KAFKA DDL
> {code:java}
> CREATE TABLE hive_catalog.flink_db_test.kfk_master_test (
> master Row String, action int, orderStatus int, orderKey String, actionTime bigint, 
> areaName String, paidAmount double, foodAmount double, startTime String, 
> person double, orderSubType int, checkoutTime String>,
> proctime as PROCTIME()
> ) WITH (properties ..){code}
>  
> FLINK client query sql
> {noformat}
> SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl 
>   JOIN hive_catalog.gauss.dim_extend_shop_info /*+ 
> OPTIONS('streaming-source.enable'='true', 
> 'streaming-source.partition.include' = 'latest', 
>'streaming-source.monitor-interval' = '12 
> h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME 
> AS OF kafk_tbl.proctime AS dim 
>ON kafk_tbl.groupID = dim.group_id where kafk_tbl.groupID is not 
> null;{noformat}
> When I execute the above statement, these stack error messages are returned
> Caused by: java.lang.NullPointerException: bufferCaused by: 
> java.lang.NullPointerException: buffer at 
> org.apache.flink.core.memory.MemorySegment.(MemorySegment.java:161) 
> ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.core.memory.HybridMemorySegment.(HybridMemorySegment.java:86)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
>  ~[flink-table_2.11-1.12.0.jar:1.12.0]
>  
> Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
> into cache after 3 retriesCaused by: 
> org.apache.flink.util.FlinkRuntimeException: Failed to load table into cache 
> after 3 retries at 
> org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
> org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20576) Flink Temporal Join Hive Dim Error

2020-12-14 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249440#comment-17249440
 ] 

HideOnBush edited comment on FLINK-20576 at 12/15/20, 2:11 AM:
---

The ORC format cannot be read, but the parquet and text formats can be read 
normally  [~jark]


was (Author: hideonbush):
The ORC format cannot be read, but the parquet and text formats can be read 
normally

> Flink Temporal Join Hive Dim Error
> --
>
> Key: FLINK-20576
> URL: https://issues.apache.org/jira/browse/FLINK-20576
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Fix For: 1.13.0
>
>
>  
> KAFKA DDL
> {code:java}
> CREATE TABLE hive_catalog.flink_db_test.kfk_master_test (
> master Row String, action int, orderStatus int, orderKey String, actionTime bigint, 
> areaName String, paidAmount double, foodAmount double, startTime String, 
> person double, orderSubType int, checkoutTime String>,
> proctime as PROCTIME()
> ) WITH (properties ..){code}
>  
> FLINK client query sql
> {noformat}
> SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl 
>   JOIN hive_catalog.gauss.dim_extend_shop_info /*+ 
> OPTIONS('streaming-source.enable'='true', 
> 'streaming-source.partition.include' = 'latest', 
>'streaming-source.monitor-interval' = '12 
> h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME 
> AS OF kafk_tbl.proctime AS dim 
>ON kafk_tbl.groupID = dim.group_id where kafk_tbl.groupID is not 
> null;{noformat}
> When I execute the above statement, these stack error messages are returned
> Caused by: java.lang.NullPointerException: bufferCaused by: 
> java.lang.NullPointerException: buffer at 
> org.apache.flink.core.memory.MemorySegment.(MemorySegment.java:161) 
> ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.core.memory.HybridMemorySegment.(HybridMemorySegment.java:86)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
>  ~[flink-table_2.11-1.12.0.jar:1.12.0]
>  
> Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
> into cache after 3 retriesCaused by: 
> org.apache.flink.util.FlinkRuntimeException: Failed to load table into cache 
> after 3 retries at 
> org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
> org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe commented on a change in pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


godfreyhe commented on a change in pull request #14380:
URL: https://github.com/apache/flink/pull/14380#discussion_r542990713



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecMultipleInput.java
##
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.planner.delegation.StreamPlanner;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
+
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Stream exec node for multiple input which contains a sub-graph of {@link 
ExecNode}s.
+ * The root node of the sub-graph is {@link #outputNode}, and the leaf nodes 
of the sub-graph are
+ * the output nodes of the {@link #getInputNodes()}.
+ *
+ * The following example shows a graph of {@code ExecNode}s with multiple 
input node:
+ * {@code
+ *          Sink
+ *           |
+ * +---------+--------+
+ * |         |        |
+ * |       Join       |
+ * |      /    \      | BatchExecMultipleInput
+ * |   Agg1    Agg2   |
+ * |    |       |     |
+ * +----+-------+-----+
+ *      |       |
+ * Exchange1 Exchange2
+ *      |       |
+ *    Scan1   Scan2
+ * }
+ *
+ * The multiple input node contains three nodes: `Join`, `Agg1` and `Agg2`.
+ * `Join` is the root node ({@link #outputNode}) of the sub-graph,
+ * `Agg1` and `Agg2` are the leaf nodes of the sub-graph,
+ * `Exchange1` and `Exchange2` are the input nodes.
+ */
+public class StreamExecMultipleInput extends StreamExecNode<RowData> {
+
+   private final ExecNode<?> outputNode;
+
+   public StreamExecMultipleInput(
+   List<ExecNode<?>> inputNodes,
+   ExecNode<?> outputNode,

Review comment:
   `outputNode` is used for translating the sub-graph into transformations





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-20509) Refactor verifyPlan methods in TableTestBase

2020-12-14 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-20509.
--
Resolution: Done

master: 7b68d537b185c921fd509ab3e78afe5ed7e47c79

> Refactor verifyPlan methods in TableTestBase
> 
>
> Key: FLINK-20509
> URL: https://issues.apache.org/jira/browse/FLINK-20509
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
>
>  Currently, we use {{verifyPlan}} method to verify the plan result for both 
> {{RelNode}} plan and {{ExecNode}} plan, because their instances are the same. 
> But once the implementation of {{RelNode}} and {{ExecNode}} are separated, we 
> can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on {{ExecNode}} plan. So 
> in order to make those methods more clear, we will do the following 
> refactoring:
> 1. replace {{planBefore}} with {{ast}} in xml file. {{ast}} is "Abstract 
> Syntax Tree", corresponding to "Abstract Syntax Tree" item in the explain 
> result; 
> 2. remove {{planAfter}}, introduce  {{optimized rel plan}} and {{optimized 
> exec plan}}. {{optimized rel plan}}  is the optimized rel plan, and is 
> similar to "Optimized Physical Plan" item in the explain result. but 
> different from "Optimized Physical Plan", {{optimized rel plan}} can 
> represent either optimized logical rel plan (for rule testing) or optimized 
> physical rel plan (for changelog validation, etc). {{optimized exec plan}} is 
> the optimized execution plan, corresponding to "Optimized Execution Plan" 
> item in the explain result. see 
> https://issues.apache.org/jira/browse/FLINK-20478 for more details about 
> explain refactor
> 3. keep {{verifyPlan}} method, which will print {{ast}}, {{optimized rel 
> plan}} and {{optimized exec plan}}. 
> 4. add {{verifyRelPlan}} method, which will print {{ast}}, {{optimized rel 
> plan}}
> 5. add {{verifyExecPlan}} method, which will print {{ast}} and {{optimized 
> exec plan}}. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20509) Refactor verifyPlan methods in TableTestBase

2020-12-14 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-20509:
---
Summary: Refactor verifyPlan methods in TableTestBase  (was: Refactor 
verifyPlan method in TableTestBase)

> Refactor verifyPlan methods in TableTestBase
> 
>
> Key: FLINK-20509
> URL: https://issues.apache.org/jira/browse/FLINK-20509
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
>
>  Currently, we use {{verifyPlan}} method to verify the plan result for both 
> {{RelNode}} plan and {{ExecNode}} plan, because their instances are the same. 
> But once the implementation of {{RelNode}} and {{ExecNode}} are separated, we 
> can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on {{ExecNode}} plan. So 
> in order to make those methods more clear, we will do the following 
> refactoring:
> 1. replace {{planBefore}} with {{ast}} in xml file. {{ast}} is "Abstract 
> Syntax Tree", corresponding to "Abstract Syntax Tree" item in the explain 
> result; 
> 2. remove {{planAfter}}, introduce  {{optimized rel plan}} and {{optimized 
> exec plan}}. {{optimized rel plan}}  is the optimized rel plan, and is 
> similar to "Optimized Physical Plan" item in the explain result. but 
> different from "Optimized Physical Plan", {{optimized rel plan}} can 
> represent either optimized logical rel plan (for rule testing) or optimized 
> physical rel plan (for changelog validation, etc). {{optimized exec plan}} is 
> the optimized execution plan, corresponding to "Optimized Execution Plan" 
> item in the explain result. see 
> https://issues.apache.org/jira/browse/FLINK-20478 for more details about 
> explain refactor
> 3. keep {{verifyPlan}} method, which will print {{ast}}, {{optimized rel 
> plan}} and {{optimized exec plan}}. 
> 4. add {{verifyRelPlan}} method, which will print {{ast}}, {{optimized rel 
> plan}}
> 5. add {{verifyExecPlan}} method, which will print {{ast}} and {{optimized 
> exec plan}}. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe merged pull request #14360: [FLINK-20509][table-planner-blink] Refactor verifyPlan method in TableTestBase

2020-12-14 Thread GitBox


godfreyhe merged pull request #14360:
URL: https://github.com/apache/flink/pull/14360


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20385) Allow to read metadata for Canal-json format

2020-12-14 Thread wangfei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249429#comment-17249429
 ] 

wangfei commented on FLINK-20385:
-

When will this version come online?

> Allow to read metadata for Canal-json format
> 
>
> Key: FLINK-20385
> URL: https://issues.apache.org/jira/browse/FLINK-20385
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: Leonard Xu
>Assignee: Nicholas Jiang
>Priority: Major
>
> In FLIP-107, we support reading metadata from the CDC format Debezium. Canal-json is
> another widely used CDC format, so we need to support reading metadata for it too.
>  
> The requirement comes from the user-zh mail list, where a user wants to read meta 
> information (database and table name) from Canal-json.
> [1] [http://apache-flink.147419.n8.nabble.com/canal-json-tt8939.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18906) Support checkpointing with multiple input operator chained with sources

2020-12-14 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249428#comment-17249428
 ] 

Xintong Song commented on FLINK-18906:
--

True, thank you [~pnowojski].

> Support checkpointing with multiple input operator chained with sources
> ---
>
> Key: FLINK-18906
> URL: https://issues.apache.org/jira/browse/FLINK-18906
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / Task
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Generalize `CheckpointBarrierHandlers` plus hook in sources to 
> `CheckpointBarrierHandlers`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14383: [FLINK-20589] Remove old scheduling strategies

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14383:
URL: https://github.com/apache/flink/pull/14383#issuecomment-744695966


   
   ## CI report:
   
   * b0dd813486c720d47f477968adb04322c6d6b1ba Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10869)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20606) sql client cannot create function using user classes from jar which specified by -j option

2020-12-14 Thread akisaya (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

akisaya updated FLINK-20606:

Description: 
With Flink version 1.12.0 (earlier versions are also affected):

I started a SQL CLI with a Hive catalog and specified a user jar file with the -j 
option like this:
{code:java}
bin/sql-client.sh embedded -j /Users/akis/Desktop/flink-func/myfunc.jar
{code}
{color:#ff}When I tried to create a custom function using a class from 
myfunc.jar, the CLI reported a ClassNotFoundException.{color}

 
{code:java}
Flink SQL> use catalog myhive;

Flink SQL> create function myfunc1 as 'me.aki.flink.flinkudf.MyFunc';
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: me.aki.flink.flinkudf.MyFunc
{code}
 

 

me.aki.flink.flinkudf.MyFunc is the class backing the UDF, and it is defined like this:

 
{code:java}
package me.aki.flink.flinkudf;

import org.apache.flink.table.functions.ScalarFunction;

public class MyFunc extends ScalarFunction {
public String eval(String s) {
return "myfunc_" + s;
}
}
{code}
 

 

 

After walking through the related code, I believe this is a bug caused by using the 
wrong classloader.

 

When using a Hive catalog, Flink uses 
{color:#ff}CatalogFunctionImpl{color} to wrap the function. Its

isGeneric() method uses {color:#ff}Class.forName(String clazzName){color}, 
which resolves the class with the current classloader (the one that loads 
flink/lib).

 

However, with the -j option, the user jar is added to the ExecutionContext and 
loaded by a separate user classloader.

 

The fix is straightforward: pass an explicit classloader to the Class.forName call.
{code:java}
ClassLoader cl = Thread.currentThread().getContextClassLoader();
Class c = Class.forName(className, true, cl);
{code}
After applying this fix and building a new Flink distribution, CREATE FUNCTION behaves correctly:

 
{code:java}
Flink SQL> select myfunc1('1');

// output
     EXPR$0
     myfunc_1
{code}
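
For illustration, here is a minimal, self-contained sketch of the classloader-aware 
lookup described above. The helper class and its fallback behaviour are assumptions 
made for this example; in Flink the actual change would go into 
CatalogFunctionImpl#isGeneric.
{code:java}
/**
 * Minimal sketch (not Flink code): resolve a class name against the thread context
 * classloader first, falling back to the classloader that loaded this helper.
 */
public final class ClassLoaderAwareLookup {

    private ClassLoaderAwareLookup() {
    }

    public static Class<?> loadClass(String className) throws ClassNotFoundException {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        if (cl == null) {
            // No context classloader set; fall back to the loader of this helper class
            // (typically the classloader that loads flink/lib).
            cl = ClassLoaderAwareLookup.class.getClassLoader();
        }
        return Class.forName(className, true, cl);
    }
}
{code}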
 

 

 

 

 

 

 

  was:
with flink version 1.12.0(versions before also affected)

I started a sql cli  with a hive catalog and specified a user jar file with -j 
option like this:
{code:java}
bin/sql-client.sh embedded -j /Users/akis/Desktop/flink-func/myfunc.jar
{code}
{color:#ff}when i tried to create a custom function using class from 
myfunc.jar,cli reported ClassNotFoundException.{color}

 
{code:java}
Flink SQL> use catalog myhive;

Flink SQL> create function myfunc1 as 'me.aki.flink.flinkudf.MyFunc';
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: me.aki.flink.flinkudf.MyFunc
{code}
 

 

me.aki.flink.flinkudf.MyFunc is the identifier of udf,which defined like this

 
{code:java}
package me.aki.flink.flinkudf;

import org.apache.flink.table.functions.ScalarFunction;

public class MyFunc extends ScalarFunction {
public String eval(String s) {
return "myfunc_" + s;
}
}
{code}
 

 

 

after walking through the related code, I believe this is a bug caused by wrong 
classloader

 

when using a hive catalog, flink will use  
{color:#ff}CatalogFunctionImpl{color}  to wrap the function。 The

isGeneric() methed  uses {color:#ff}Class.forName(String clazzName){color} 
which will use a current classloader(classloader loads flink/lib) to determine 
the class。

 

however with -j option, user jar is set to the ExecutionContext and loaded by 
another userClassLoader

 

and the fix can be easy to pass a classloader to the Class.forName method.
{code:java}
ClassLoader cl = Thread.currentThread().getContextClassLoader();
Class c = Class.forName(className, true, cl);
{code}
after do such fix and build a new flink dist,create function behaves right

 

 

 

 

 

 

 

 


> sql client cannot create function using user classes from jar which specified 
> by  -j option   
> --
>
> Key: FLINK-20606
> URL: https://issues.apache.org/jira/browse/FLINK-20606
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / API, Table SQL / Client
>Affects Versions: 1.10.2, 1.12.0, 1.11.2
>Reporter: akisaya
>Priority: Major
>
> with flink version 1.12.0(versions before also affected)
> I started a sql cli  with a hive catalog and specified a user jar file with 
> -j option like this:
> {code:java}
> bin/sql-client.sh embedded -j /Users/akis/Desktop/flink-func/myfunc.jar
> {code}
> {color:#ff}when i tried to create a custom function using class from 
> myfunc.jar,cli reported ClassNotFoundException.{color}
>  
> {code:java}
> Flink SQL> use catalog myhive;
> Flink SQL> create function myfunc1 as 'me.aki.flink.flinkudf.MyFunc';
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: me.aki.flink.flinkudf.MyFunc
> {code}
>  
>  
> me.aki.flink.flinkudf.MyFunc is the 

[jira] [Updated] (FLINK-20606) sql client cannot create function using user classes from jar which specified by -j option

2020-12-14 Thread akisaya (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

akisaya updated FLINK-20606:

Description: 
with flink version 1.12.0(versions before also affected)

I started a SQL CLI with a Hive catalog and specified a user jar file with the -j 
option like this:
{code:java}
bin/sql-client.sh embedded -j /Users/akis/Desktop/flink-func/myfunc.jar
{code}
{color:#ff}When I tried to create a custom function using a class from 
myfunc.jar, the CLI reported a ClassNotFoundException.{color}

 
{code:java}
Flink SQL> use catalog myhive;

Flink SQL> create function myfunc1 as 'me.aki.flink.flinkudf.MyFunc';
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: me.aki.flink.flinkudf.MyFunc
{code}
 

 

me.aki.flink.flinkudf.MyFunc is the class backing the UDF, and it is defined like this:

 
{code:java}
package me.aki.flink.flinkudf;

import org.apache.flink.table.functions.ScalarFunction;

public class MyFunc extends ScalarFunction {
public String eval(String s) {
return "myfunc_" + s;
}
}
{code}
 

 

 

After walking through the related code, I believe this is a bug caused by using the 
wrong classloader.

 

When using a Hive catalog, Flink uses 
{color:#ff}CatalogFunctionImpl{color} to wrap the function. Its

isGeneric() method uses {color:#ff}Class.forName(String clazzName){color}, 
which resolves the class with the current classloader (the one that loads 
flink/lib).

 

However, with the -j option, the user jar is added to the ExecutionContext and 
loaded by a separate user classloader.

 

The fix is straightforward: pass an explicit classloader to the Class.forName call.
{code:java}
ClassLoader cl = Thread.currentThread().getContextClassLoader();
Class c = Class.forName(className, true, cl);
{code}
After applying this fix and building a new Flink distribution, CREATE FUNCTION behaves correctly.

 

 

 

 

 

 

 

 

  was:
I started a sql cli  with a hive catalog and specified a user jar file with -j 
option like this:
{code:java}
bin/sql-client.sh embedded -j /Users/akis/Desktop/flink-func/myfunc.jar
{code}
{color:#FF}when i tried to create a custom function using class from 
myfunc.jar,cli reported ClassNotFoundException.{color}

 
{code:java}
Flink SQL> use catalog myhive;

Flink SQL> create function myfunc1 as 'me.aki.flink.flinkudf.MyFunc';
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: me.aki.flink.flinkudf.MyFunc
{code}
 

 

me.aki.flink.flinkudf.MyFunc is the identifier of udf,which defined like this

 
{code:java}
package me.aki.flink.flinkudf;

import org.apache.flink.table.functions.ScalarFunction;

public class MyFunc extends ScalarFunction {
public String eval(String s) {
return "myfunc_" + s;
}
}
{code}
 

 

 

after walking through the related code, I believe this is a bug caused by wrong 
classloader

 

when using a hive catalog, flink will use  
{color:#FF}CatalogFunctionImpl{color}  to wrap the function。 The

isGeneric() methed  uses {color:#FF}Class.forName(String clazzName){color} 
which will use a current classloader(classloader loads flink/lib) to determine 
the class。

 

however with -j option, user jar is set to the ExecutionContext and loaded by 
another userClassLoader

 

and the fix can be easy to pass a classloader to the Class.forName method.
{code:java}
ClassLoader cl = Thread.currentThread().getContextClassLoader();
Class c = Class.forName(className, true, cl);
{code}
after do such fix and build a new flink dist,create function behaves right

 

 

 

 

 

 

 

 


> sql client cannot create function using user classes from jar which specified 
> by  -j option   
> --
>
> Key: FLINK-20606
> URL: https://issues.apache.org/jira/browse/FLINK-20606
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / API, Table SQL / Client
>Affects Versions: 1.10.2, 1.12.0, 1.11.2
>Reporter: akisaya
>Priority: Major
>
> with flink version 1.12.0(versions before also affected)
> I started a sql cli  with a hive catalog and specified a user jar file with 
> -j option like this:
> {code:java}
> bin/sql-client.sh embedded -j /Users/akis/Desktop/flink-func/myfunc.jar
> {code}
> {color:#ff}when i tried to create a custom function using class from 
> myfunc.jar,cli reported ClassNotFoundException.{color}
>  
> {code:java}
> Flink SQL> use catalog myhive;
> Flink SQL> create function myfunc1 as 'me.aki.flink.flinkudf.MyFunc';
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: me.aki.flink.flinkudf.MyFunc
> {code}
>  
>  
> me.aki.flink.flinkudf.MyFunc is the identifier of udf,which defined like this
>  
> {code:java}
> package me.aki.flink.flinkudf;
> import 

[jira] [Commented] (FLINK-20606) sql client cannot create function using user classes from jar which specified by -j option

2020-12-14 Thread akisaya (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249398#comment-17249398
 ] 

akisaya commented on FLINK-20606:
-

[~jark], I would like to make a PR for this issue. Could you assign it to me?

> sql client cannot create function using user classes from jar which specified 
> by  -j option   
> --
>
> Key: FLINK-20606
> URL: https://issues.apache.org/jira/browse/FLINK-20606
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / API, Table SQL / Client
>Affects Versions: 1.10.2, 1.12.0, 1.11.2
>Reporter: akisaya
>Priority: Major
>
> I started a sql cli  with a hive catalog and specified a user jar file with 
> -j option like this:
> {code:java}
> bin/sql-client.sh embedded -j /Users/akis/Desktop/flink-func/myfunc.jar
> {code}
> {color:#FF}when i tried to create a custom function using class from 
> myfunc.jar,cli reported ClassNotFoundException.{color}
>  
> {code:java}
> Flink SQL> use catalog myhive;
> Flink SQL> create function myfunc1 as 'me.aki.flink.flinkudf.MyFunc';
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: me.aki.flink.flinkudf.MyFunc
> {code}
>  
>  
> me.aki.flink.flinkudf.MyFunc is the identifier of udf,which defined like this
>  
> {code:java}
> package me.aki.flink.flinkudf;
> import org.apache.flink.table.functions.ScalarFunction;
> public class MyFunc extends ScalarFunction {
> public String eval(String s) {
> return "myfunc_" + s;
> }
> }
> {code}
>  
>  
>  
> after walking through the related code, I believe this is a bug caused by 
> wrong classloader
>  
> when using a hive catalog, flink will use  
> {color:#FF}CatalogFunctionImpl{color}  to wrap the function。 The
> isGeneric() methed  uses {color:#FF}Class.forName(String 
> clazzName){color} which will use a current classloader(classloader loads 
> flink/lib) to determine the class。
>  
> however with -j option, user jar is set to the ExecutionContext and loaded by 
> another userClassLoader
>  
> and the fix can be easy to pass a classloader to the Class.forName method.
> {code:java}
> ClassLoader cl = Thread.currentThread().getContextClassLoader();
> Class c = Class.forName(className, true, cl);
> {code}
> after do such fix and build a new flink dist,create function behaves right
>  
>  
>  
>  
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20606) sql client cannot create function using user classes from jar which specified by -j option

2020-12-14 Thread akisaya (Jira)
akisaya created FLINK-20606:
---

 Summary: sql client cannot create function using user classes from 
jar which specified by  -j option   
 Key: FLINK-20606
 URL: https://issues.apache.org/jira/browse/FLINK-20606
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive, Table SQL / API, Table SQL / Client
Affects Versions: 1.11.2, 1.12.0, 1.10.2
Reporter: akisaya


I started a SQL CLI with a Hive catalog and specified a user jar file with the -j 
option like this:
{code:java}
bin/sql-client.sh embedded -j /Users/akis/Desktop/flink-func/myfunc.jar
{code}
{color:#FF}When I tried to create a custom function using a class from 
myfunc.jar, the CLI reported a ClassNotFoundException.{color}

 
{code:java}
Flink SQL> use catalog myhive;

Flink SQL> create function myfunc1 as 'me.aki.flink.flinkudf.MyFunc';
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: me.aki.flink.flinkudf.MyFunc
{code}
 

 

me.aki.flink.flinkudf.MyFunc is the class backing the UDF, and it is defined like this:

 
{code:java}
package me.aki.flink.flinkudf;

import org.apache.flink.table.functions.ScalarFunction;

public class MyFunc extends ScalarFunction {
public String eval(String s) {
return "myfunc_" + s;
}
}
{code}
 

 

 

After walking through the related code, I believe this is a bug caused by using the 
wrong classloader.

 

When using a Hive catalog, Flink uses 
{color:#FF}CatalogFunctionImpl{color} to wrap the function. Its

isGeneric() method uses {color:#FF}Class.forName(String clazzName){color}, 
which resolves the class with the current classloader (the one that loads 
flink/lib).

 

However, with the -j option, the user jar is added to the ExecutionContext and 
loaded by a separate user classloader.

 

The fix is straightforward: pass an explicit classloader to the Class.forName call.
{code:java}
ClassLoader cl = Thread.currentThread().getContextClassLoader();
Class c = Class.forName(className, true, cl);
{code}
After applying this fix and building a new Flink distribution, CREATE FUNCTION behaves correctly.

 

 

 

 

 

 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14381: [FLINK-20588] Added resource files for deploying a Flink on Mesos cluster locally.

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14381:
URL: https://github.com/apache/flink/pull/14381#issuecomment-744503895


   
   ## CI report:
   
   * e61e157c3e00951e26fb98849d641fc917a14b64 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10868)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol closed pull request #14382: [FLINK-20605][coordination] Only process acknowledgements from registered TaskExecutors

2020-12-14 Thread GitBox


zentol closed pull request #14382:
URL: https://github.com/apache/flink/pull/14382


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-playgrounds] alpinegizmo merged pull request #12: Added a guide for Windows users.

2020-12-14 Thread GitBox


alpinegizmo merged pull request #12:
URL: https://github.com/apache/flink-playgrounds/pull/12


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14383: [FLINK-20589] Remove old scheduling strategies

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14383:
URL: https://github.com/apache/flink/pull/14383#issuecomment-744695966


   
   ## CI report:
   
   * b0dd813486c720d47f477968adb04322c6d6b1ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10869)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14381: [FLINK-20588] Added resource files for deploying a Flink on Mesos cluster locally.

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14381:
URL: https://github.com/apache/flink/pull/14381#issuecomment-744503895


   
   ## CI report:
   
   * 6ea39ec0119663f4641aa5ae4074efa5d7a34322 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10865)
 
   * e61e157c3e00951e26fb98849d641fc917a14b64 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10868)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13879: [FLINK-19832][coordination] Do not schedule shared slot bulk if some slots have failed immediately

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #13879:
URL: https://github.com/apache/flink/pull/13879#issuecomment-720415853


   
   ## CI report:
   
   * 365f22a4626a975b9b556776947306db7dfdfeb1 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10866)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14383: [FLINK-20589] Remove old scheduling strategies

2020-12-14 Thread GitBox


flinkbot commented on pull request #14383:
URL: https://github.com/apache/flink/pull/14383#issuecomment-744695966


   
   ## CI report:
   
   * b0dd813486c720d47f477968adb04322c6d6b1ba UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14381: [FLINK-20588] Added resource files for deploying a Flink on Mesos cluster locally.

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14381:
URL: https://github.com/apache/flink/pull/14381#issuecomment-744503895


   
   ## CI report:
   
   * 6ea39ec0119663f4641aa5ae4074efa5d7a34322 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10865)
 
   * e61e157c3e00951e26fb98849d641fc917a14b64 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14383: [FLINK-20589] Remove old scheduling strategies

2020-12-14 Thread GitBox


flinkbot commented on pull request #14383:
URL: https://github.com/apache/flink/pull/14383#issuecomment-744690562


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b0dd813486c720d47f477968adb04322c6d6b1ba (Mon Dec 14 
20:27:39 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-20589).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-20594) Remove DefaultExecutionSlotAllocator

2020-12-14 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-20594:
--

Assignee: Robert Metzger

> Remove DefaultExecutionSlotAllocator
> 
>
> Key: FLINK-20594
> URL: https://issues.apache.org/jira/browse/FLINK-20594
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Assignee: Robert Metzger
>Priority: Major
> Fix For: 1.13.0
>
>
> Remove the {{DefaultExecutionSlotAllocator}} which is only used by the legacy 
> {{SchedulingStrategies}}. Probably {{AbstractExecutionSlotAllocator}} can 
> also be removed in the same step.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20589) Remove old scheduling strategies

2020-12-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20589:
---
Labels: pull-request-available  (was: )

> Remove old scheduling strategies
> 
>
> Key: FLINK-20589
> URL: https://issues.apache.org/jira/browse/FLINK-20589
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> After the implementation of the pipelined region scheduler (FLINK-16430), we 
> no longer need the old {{LazyFromSourcesSchedulingStrategy}} and 
> {{EagerSchedulingStrategy}} strategies or the components they require. To 
> simplify maintenance, I suggest removing these components.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rmetzger opened a new pull request #14383: [FLINK-20589] Remove old scheduling strategies

2020-12-14 Thread GitBox


rmetzger opened a new pull request #14383:
URL: https://github.com/apache/flink/pull/14383


   ## What is the purpose of the change
   
   After the implementation of the pipelined region scheduler (FLINK-16430), we 
no longer need the old LazyFromSourcesSchedulingStrategy and 
EagerSchedulingStrategy strategies or the components they require. To 
simplify maintenance, I suggest removing these components.
   
   ## Brief change log
   
   - This PR addresses the first three subtasks of FLINK-20589
   
   
   ## Verifying this change
   
   Verified by CI
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (**yes** / no / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-playgrounds] alpinegizmo commented on a change in pull request #12: Added a guide for Windows users.

2020-12-14 Thread GitBox


alpinegizmo commented on a change in pull request #12:
URL: https://github.com/apache/flink-playgrounds/pull/12#discussion_r542725837



##
File path: operations-playground/README.md
##
@@ -36,6 +36,11 @@ docker-compose up -d
 
 You can check if the playground was successfully started by accessing the 
WebUI of the Flink cluster at [http://localhost:8081](http://localhost:8081).
 
+ for Windows Users
+
+If you get the error "Unhandled exception: Filesharing has been cancelled", 
you should configure the file sharing in Docker Desktop before starting.

Review comment:
   ```suggestion
   If you get the error "Unhandled exception: Filesharing has been cancelled", 
you should configure file sharing in Docker Desktop before starting.
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14382: [FLINK-20605][coordination] Only process acknowledgements from registered TaskExecutors

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14382:
URL: https://github.com/apache/flink/pull/14382#issuecomment-744654488


   
   ## CI report:
   
   * aa893307da9bafa60c73346203554de0303b11c2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10867)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14382: [FLINK-20605][coordination] Only process acknowledgements from registered TaskExecutors

2020-12-14 Thread GitBox


flinkbot commented on pull request #14382:
URL: https://github.com/apache/flink/pull/14382#issuecomment-744654488


   
   ## CI report:
   
   * aa893307da9bafa60c73346203554de0303b11c2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #14372: [FLINK-19259][Kinesis] Remove references to allow classloader unloading

2020-12-14 Thread GitBox


zentol commented on a change in pull request #14372:
URL: https://github.com/apache/flink/pull/14372#discussion_r542644960



##
File path: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
##
@@ -423,4 +432,43 @@ private void flushSync() throws Exception {
}
}
}
+
+   /**
+* Remove references created by the producer, preventing the 
classloader to unload. References were
+* analyzed as of version 0.14.0.
+*/
+   private void runClassLoaderReleaseHook(ClassLoader classLoader) {
+   // unregister admin mbean
+   AwsSdkMetrics.unregisterMetricAdminMBean();
+
+   try {
+   // Remove FileAgeManager
+   Class fileAgeManagerClazz = 
Class.forName("com.amazonaws.services.kinesis.producer.FileAgeManager", true, 
classLoader);

Review comment:
   ~~Shouldn't we be able to use `getClass().getClassLoader()`? Then we 
wouldn't have to modify the `RuntimeContext` API.~~





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #14372: [FLINK-19259][Kinesis] Remove references to allow classloader unloading

2020-12-14 Thread GitBox


zentol commented on a change in pull request #14372:
URL: https://github.com/apache/flink/pull/14372#discussion_r542666752



##
File path: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
##
@@ -423,4 +432,43 @@ private void flushSync() throws Exception {
}
}
}
+
+   /**
+* Remove references created by the producer, preventing the 
classloader to unload. References were
+* analyzed as of version 0.14.0.
+*/
+   private void runClassLoaderReleaseHook(ClassLoader classLoader) {
+   // unregister admin mbean
+   AwsSdkMetrics.unregisterMetricAdminMBean();
+
+   try {
+   // Remove FileAgeManager
+   Class fileAgeManagerClazz = 
Class.forName("com.amazonaws.services.kinesis.producer.FileAgeManager", true, 
classLoader);

Review comment:
   Ignore my previous suggestion; we shouldn't do that on the off-chance 
that the connector is loaded via /lib.
   
   Could we use `RuntimeContext#getUserCodeClassLoader` instead?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #14372: [FLINK-19259][Kinesis] Remove references to allow classloader unloading

2020-12-14 Thread GitBox


zentol commented on a change in pull request #14372:
URL: https://github.com/apache/flink/pull/14372#discussion_r542666752



##
File path: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
##
@@ -423,4 +432,43 @@ private void flushSync() throws Exception {
}
}
}
+
+   /**
+* Remove references created by the producer, preventing the 
classloader to unload. References were
+* analyzed as of version 0.14.0.
+*/
+   private void runClassLoaderReleaseHook(ClassLoader classLoader) {
+   // unregister admin mbean
+   AwsSdkMetrics.unregisterMetricAdminMBean();
+
+   try {
+   // Remove FileAgeManager
+   Class fileAgeManagerClazz = 
Class.forName("com.amazonaws.services.kinesis.producer.FileAgeManager", true, 
classLoader);

Review comment:
   We shouldn't do that, on the off-chance that the connector is loaded via 
/lib.
   
   Could we use `RuntimeContext#getUserCodeClassLoader` instead?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #14372: [FLINK-19259][Kinesis] Remove references to allow classloader unloading

2020-12-14 Thread GitBox


zentol commented on a change in pull request #14372:
URL: https://github.com/apache/flink/pull/14372#discussion_r542643500



##
File path: 
flink-core/src/main/java/org/apache/flink/api/common/functions/RuntimeContext.java
##
@@ -128,12 +129,11 @@
 *
 * The release hook is executed just before the user code class 
loader is being released.
 * Registration only happens if no hook has been registered under this 
name already.
-*

Review comment:
   revert

##
File path: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
##
@@ -423,4 +432,43 @@ private void flushSync() throws Exception {
}
}
}
+
+   /**
+* Remove references created by the producer, preventing the 
classloader to unload. References were
+* analyzed as of version 0.14.0.
+*/
+   private void runClassLoaderReleaseHook(ClassLoader classLoader) {
+   // unregister admin mbean
+   AwsSdkMetrics.unregisterMetricAdminMBean();
+
+   try {
+   // Remove FileAgeManager
+   Class fileAgeManagerClazz = 
Class.forName("com.amazonaws.services.kinesis.producer.FileAgeManager", true, 
classLoader);

Review comment:
   Shouldn't we be able to use `getClass().getClassLoader()`? Then we 
wouldn't have to modify the `RuntimeContext` API.

##
File path: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
##
@@ -423,4 +432,43 @@ private void flushSync() throws Exception {
}
}
}
+
+   /**
+* Remove references created by the producer, preventing the 
classloader to unload. References were
+* analyzed as of version 0.14.0.
+*/
+   private void runClassLoaderReleaseHook(ClassLoader classLoader) {
+   // unregister admin mbean

Review comment:
   ```suggestion
   ```

##
File path: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
##
@@ -423,4 +432,43 @@ private void flushSync() throws Exception {
}
}
}
+
+   /**
+* Remove references created by the producer, preventing the 
classloader to unload. References were
+* analyzed as of version 0.14.0.

Review comment:
   It would be good to add a version reference for which 
`aws-java-sdk-core` was used.

##
File path: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
##
@@ -423,4 +432,43 @@ private void flushSync() throws Exception {
}
}
}
+
+   /**
+* Remove references created by the producer, preventing the 
classloader to unload. References were
+* analyzed as of version 0.14.0.
+*/
+   private void runClassLoaderReleaseHook(ClassLoader classLoader) {
+   // unregister admin mbean
+   AwsSdkMetrics.unregisterMetricAdminMBean();
+
+   try {
+   // Remove FileAgeManager
+   Class fileAgeManagerClazz = 
Class.forName("com.amazonaws.services.kinesis.producer.FileAgeManager", true, 
classLoader);
+   Field instanceField = 
fileAgeManagerClazz.getDeclaredField("instance");
+   instanceField.setAccessible(true);
+
+   // unset (static final) field FileAgeManager.instance
+   Field modifiersField = 
Field.class.getDeclaredField("modifiers");
+   modifiersField.setAccessible(true);
+   modifiersField.setInt(instanceField, 
instanceField.getModifiers() & ~Modifier.FINAL);
+   Object fileAgeManager = instanceField.get(null);
+   instanceField.set(null, null);
+
+   // shutdown thread pool

Review comment:
   This should be the key change necessary to ensure the ClassLoader can be 
cleaned up.
   We shouldn't have to touch the FileAgeManager#instance reference; so long as 
no thread holds references to it, we should be good.
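   
   For illustration, a minimal sketch of that idea under stated assumptions: the 
decisive step is shutting down any threads that pin user-code classes (an 
`ExecutorService` stands in here for the producer's FileAgeManager pool), and the 
hook would be registered through the RuntimeContext release-hook API whose javadoc 
is quoted earlier in this review; exact method names are not shown here.
   ```java
   import java.util.concurrent.ExecutorService;
   
   // Sketch only: the essential part of a classloader release hook is stopping
   // executors whose threads would otherwise keep references to user-code classes.
   final class ReleaseHookSketch {
   
       static Runnable releaseHookFor(ExecutorService connectorThreadPool) {
           return () -> {
               // Stop the worker threads so they no longer pin the user-code classloader.
               connectorThreadPool.shutdownNow();
           };
       }
   }
   ```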

##
File path: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
##
@@ -423,4 +432,43 @@ private void flushSync() throws Exception {
}
}
}
+
+   /**
+* Remove references created by the producer, preventing the 
classloader to unload. References were
+* analyzed as of version 0.14.0.
+*/
+   private void runClassLoaderReleaseHook(ClassLoader classLoader) {
+

[GitHub] [flink] flinkbot edited a comment on pull request #14312: [FLINK-20491] Support Broadcast State in BATCH execution mode

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14312:
URL: https://github.com/apache/flink/pull/14312#issuecomment-738876739


   
   ## CI report:
   
   * b7d1707557148278604a497b07e2682b342da246 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10863)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14382: [FLINK-20605][coordination] Only process acknowledgements from registered TaskExecutors

2020-12-14 Thread GitBox


flinkbot commented on pull request #14382:
URL: https://github.com/apache/flink/pull/14382#issuecomment-744640957


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit aa893307da9bafa60c73346203554de0303b11c2 (Mon Dec 14 
18:56:10 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20605) DeclarativeSlotManager crashes if slot allocation notification is processed after taskexecutor shutdown

2020-12-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20605:
---
Labels: pull-request-available  (was: )

> DeclarativeSlotManager crashes if slot allocation notification is processed 
> after taskexecutor shutdown
> ---
>
> Key: FLINK-20605
> URL: https://issues.apache.org/jira/browse/FLINK-20605
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> It is possible that a notification from a task executor about a slot being 
> allocated can be processed after that very task executor has unregistered 
> itself from the resource manager.
> As a result we run into an exception when trying to mark this slot as 
> allocated, because it no longer exists and a precondition catches this case.
> We could solve this by checking in 
> {{DeclarativeResourceManager#allocateSlot}} whether the task executor we 
> received the acknowledge from is still registered.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol opened a new pull request #14382: [FLINK-20605][coordination] Only process acknowledgements from registered TaskExecutors

2020-12-14 Thread GitBox


zentol opened a new pull request #14382:
URL: https://github.com/apache/flink/pull/14382


   It is possible that a notification from a task executor about a slot being 
allocated can be processed after that very task executor has unregistered 
itself from the resource manager.
   As a result we run into an exception when trying to mark this slot as 
allocated, because it no longer exists and a precondition catches this case.
   
   We could solve this by checking in `DeclarativeResourceManager#allocateSlot` 
whether the task executor from which we received the acknowledgement is still 
registered.
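   
   As a rough illustration (not the actual Flink code; the types below are 
simplified placeholders), the guard boils down to ignoring acknowledgements from 
senders that are no longer registered:
   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   
   // Simplified sketch of the guard described above. The real change operates on
   // TaskExecutor registrations inside the resource manager / slot manager.
   final class AcknowledgementGuardSketch {
   
       private final Map<String, Object> registeredExecutors = new ConcurrentHashMap<>();
   
       void register(String executorId) {
           registeredExecutors.put(executorId, new Object());
       }
   
       void unregister(String executorId) {
           registeredExecutors.remove(executorId);
       }
   
       void onSlotAllocationAcknowledged(String executorId, String slotId) {
           if (!registeredExecutors.containsKey(executorId)) {
               // The executor unregistered before its acknowledgement was processed;
               // drop the message instead of tripping a precondition.
               return;
           }
           // ... mark slotId as allocated for executorId ...
       }
   }
   ```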
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20605) DeclarativeSlotManager crashes if slot allocation notification is processed after taskexecutor shutdown

2020-12-14 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-20605:
-
Description: 
It is possible that a notification from a task executor about a slot being 
allocated can be processed after that very task executor has unregistered 
itself from the resource manager.
As a result we run into an exception when trying to mark this slot as 
allocated, because it no longer exists and a precondition catches this case.

We could solve this by checking in {{DeclarativeResourceManager#allocateSlot}} 
whether the task executor from which we received the acknowledgement is still registered.

  was:
It appears to be possible that a notification from a task executor about a slot 
being allocated can be processed after that very task executor has unregistered 
itself from the resource manager.
As a result we run into an exception when trying to mark this slot as 
allocated, because it no longer exists and a precondition catches this case.

We could solve this by checking in {{DeclarativeResourceManager#allocateSlot}} 
whether the task executor we received the acknowledge from is still registered.


> DeclarativeSlotManager crashes if slot allocation notification is processed 
> after taskexecutor shutdown
> ---
>
> Key: FLINK-20605
> URL: https://issues.apache.org/jira/browse/FLINK-20605
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.13.0
>
>
> It is possible that a notification from a task executor about a slot being 
> allocated can be processed after that very task executor has unregistered 
> itself from the resource manager.
> As a result we run into an exception when trying to mark this slot as 
> allocated, because it no longer exists and a precondition catches this case.
> We could solve this by checking in 
> {{DeclarativeResourceManager#allocateSlot}} whether the task executor we 
> received the acknowledge from is still registered.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20605) DeclarativeSlotManager crashes if slot allocation notification is processed after taskexecutor shutdown

2020-12-14 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-20605:


 Summary: DeclarativeSlotManager crashes if slot allocation 
notification is processed after taskexecutor shutdown
 Key: FLINK-20605
 URL: https://issues.apache.org/jira/browse/FLINK-20605
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.13.0


It appears to be possible that a notification from a task executor about a slot 
being allocated can be processed after that very task executor has unregistered 
itself from the resource manager.
As a result we run into an exception when trying to mark this slot as 
allocated, because it no longer exists and a precondition catches this case.

We could solve this by checking in {{DeclarativeResourceManager#allocateSlot}} 
whether the task executor from which we received the acknowledgement is still registered.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20389) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-14 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249199#comment-17249199
 ] 

Robert Metzger commented on FLINK-20389:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10855=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20389
> URL: https://issues.apache.org/jira/browse/FLINK-20389
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Matthias
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
> Attachments: FLINK-20389-failure.log
>
>
> [Build|https://dev.azure.com/mapohl/flink/_build/results?buildId=118=results]
>  failed due to {{UnalignedCheckpointITCase}} caused by a 
> {{NullPointerException}}:
> {code:java}
> Test execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase) failed 
> with:
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> 

[jira] [Commented] (FLINK-20529) Publish Dockerfiles for release 1.12.0

2020-12-14 Thread roberto hashioka (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249190#comment-17249190
 ] 

roberto hashioka commented on FLINK-20529:
--

Are we good to use this image 
[https://github.com/apache/flink-docker/tree/master/1.12/scala_2.12-java11-debian], 
or is it still a work in progress?

> Publish Dockerfiles for release 1.12.0
> --
>
> Key: FLINK-20529
> URL: https://issues.apache.org/jira/browse/FLINK-20529
> Project: Flink
>  Issue Type: Task
>  Components: Release System
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.1
>
>
> Publish the Dockerfiles for 1.12.0 to finalize the release process.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14379: [FLINK-20563][hive] Support built-in functions for Hive versions prio…

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14379:
URL: https://github.com/apache/flink/pull/14379#issuecomment-73009


   
   ## CI report:
   
   * a413199ba00fdf4b41b09e7cf4bcf02bcd6da0ce Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10861)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13424: [FLINK-19219] Run JobManager initialization in separate thread to make it cancellable

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #13424:
URL: https://github.com/apache/flink/pull/13424#issuecomment-694863633


   
   ## CI report:
   
   * f247ac96fef0e62881f4726e87f56291e72e883a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10858)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14380:
URL: https://github.com/apache/flink/pull/14380#issuecomment-73572


   
   ## CI report:
   
   * 2d5ac91ee373efed1ede7da63abbfdbbfb60aabf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10862)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13879: [FLINK-19832][coordination] Do not schedule shared slot bulk if some slots have failed immediately

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #13879:
URL: https://github.com/apache/flink/pull/13879#issuecomment-720415853


   
   ## CI report:
   
   * 52a32b3e279a3c17dfe865e6d09e1b57d5b29bfc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10815)
 
   * 365f22a4626a975b9b556776947306db7dfdfeb1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10866)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13879: [FLINK-19832][coordination] Do not schedule shared slot bulk if some slots have failed immediately

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #13879:
URL: https://github.com/apache/flink/pull/13879#issuecomment-720415853


   
   ## CI report:
   
   * 52a32b3e279a3c17dfe865e6d09e1b57d5b29bfc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10815)
 
   * 365f22a4626a975b9b556776947306db7dfdfeb1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14378: [FLINK-20522][table] Make implementing a built-in function straightforward

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14378:
URL: https://github.com/apache/flink/pull/14378#issuecomment-744402991


   
   ## CI report:
   
   * 0e6914c466a975c74a0c0d49f7635d7d33ef00e2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10854)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14381: [FLINK-20588] Added resource files for deploying a Flink on Mesos cluster locally.

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14381:
URL: https://github.com/apache/flink/pull/14381#issuecomment-744503895


   
   ## CI report:
   
   * 6ea39ec0119663f4641aa5ae4074efa5d7a34322 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10865)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14360: [FLINK-20509][table-planner-blink] Refactor verifyPlan method in TableTestBase

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14360:
URL: https://github.com/apache/flink/pull/14360#issuecomment-742794536


   
   ## CI report:
   
   * b35d38f035e0d4155cc958559b69e7d111463baf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10853)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14312: [FLINK-20491] Support Broadcast State in BATCH execution mode

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14312:
URL: https://github.com/apache/flink/pull/14312#issuecomment-738876739


   
   ## CI report:
   
   * 897fe1eb2bb46f63f8cea373c5b5d917070823b5 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10851)
 
   * b7d1707557148278604a497b07e2682b342da246 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10863)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14346: [FLINK-20354] Rework standalone docs pages

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14346:
URL: https://github.com/apache/flink/pull/14346#issuecomment-741680007


   
   ## CI report:
   
   * 395f8fc742495a2a4dccf2597123c3d2b1986e02 UNKNOWN
   * 8e4e06a171990c09da0438ed1b2432df5ec59989 UNKNOWN
   * 293ea9218bb6340b069cb9e55f55052b8ed4e005 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10864)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-20591) Remove JobManagerOptions.SCHEDULING_STRATEGY

2020-12-14 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249082#comment-17249082
 ] 

Till Rohrmann edited comment on FLINK-20591 at 12/14/20, 4:14 PM:
--

Yes, I would suggest doing it as you proposed it. cc [~ym]


was (Author: till.rohrmann):
Yes, I would suggest doing it as you proposed it.

> Remove JobManagerOptions.SCHEDULING_STRATEGY
> 
>
> Key: FLINK-20591
> URL: https://issues.apache.org/jira/browse/FLINK-20591
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Assignee: Robert Metzger
>Priority: Major
> Fix For: 1.13.0
>
>
> In order to disable the legacy scheduling strategies we need to remove the 
> {{JobManagerOptions.SCHEDULING_STRATEGY}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20591) Remove JobManagerOptions.SCHEDULING_STRATEGY

2020-12-14 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249082#comment-17249082
 ] 

Till Rohrmann commented on FLINK-20591:
---

Yes, I would suggest doing it as you proposed it.

> Remove JobManagerOptions.SCHEDULING_STRATEGY
> 
>
> Key: FLINK-20591
> URL: https://issues.apache.org/jira/browse/FLINK-20591
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Assignee: Robert Metzger
>Priority: Major
> Fix For: 1.13.0
>
>
> In order to disable the legacy scheduling strategies we need to remove the 
> {{JobManagerOptions.SCHEDULING_STRATEGY}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14381: [FLINK-20588] Added resource files for deploying a Flink on Mesos cluster locally.

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14381:
URL: https://github.com/apache/flink/pull/14381#issuecomment-744503895


   
   ## CI report:
   
   * 6ea39ec0119663f4641aa5ae4074efa5d7a34322 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10865)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20591) Remove JobManagerOptions.SCHEDULING_STRATEGY

2020-12-14 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249048#comment-17249048
 ] 

Robert Metzger commented on FLINK-20591:


It seems that the approximate local recovery only works with the legacy 
scheduler. Shall I just disable the related test & file a ticket or is there 
another plan?

> Remove JobManagerOptions.SCHEDULING_STRATEGY
> 
>
> Key: FLINK-20591
> URL: https://issues.apache.org/jira/browse/FLINK-20591
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Assignee: Robert Metzger
>Priority: Major
> Fix For: 1.13.0
>
>
> In order to disable the legacy scheduling strategies we need to remove the 
> {{JobManagerOptions.SCHEDULING_STRATEGY}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19832) Improve handling of immediately failed physical slot in SlotSharingExecutionSlotAllocator

2020-12-14 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-19832:
--
Fix Version/s: 1.12.1
   1.13.0

> Improve handling of immediately failed physical slot in 
> SlotSharingExecutionSlotAllocator
> -
>
> Key: FLINK-19832
> URL: https://issues.apache.org/jira/browse/FLINK-19832
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Andrey Zagrebin
>Assignee: Andrey Zagrebin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.12.1
>
>
> Improve handling of immediately failed physical slot in 
> SlotSharingExecutionSlotAllocator
> If a physical slot future immediately fails for a new SharedSlot in 
> SlotSharingExecutionSlotAllocator#getOrAllocateSharedSlot but we continue to 
> add logical slots to this SharedSlot, the logical slot eventually also fails 
> and gets removed from the {{SharedSlot}}, which then gets released (state 
> RELEASED). The subsequent logical slot additions in the loop of 
> {{allocateLogicalSlotsFromSharedSlots}} will then fail the scheduling
> with the ALLOCATED state check, because the slot is already RELEASED.
> The subsequent bulk timeout check will also not find the SharedSlot and will 
> fail with an NPE.
> Hence, a SharedSlot with an immediately failed physical slot future 
> should not be kept in the SlotSharingExecutionSlotAllocator, and the logical 
> slot requests depending on it can be returned as failed immediately. The bulk 
> timeout check does not need to be started, because if some physical (and its 
> logical) slot requests failed, the whole bulk will be canceled by the 
> scheduler.
> If the last assumption does not hold for future scheduling, this bulk 
> failure might need additional explicit cancelation of pending requests. We 
> expect to refactor this for the declarative scheduling anyway.
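
A minimal sketch of the intended handling, using plain {{CompletableFuture}}s instead 
of the actual SlotSharingExecutionSlotAllocator/SharedSlot API (all names below are 
illustrative, not Flink code):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Illustrative only: if the physical slot future has already failed, fail all
// dependent logical slot requests right away and do not register the shared
// slot or start a bulk timeout check for it.
public final class SharedSlotSketch {

    private SharedSlotSketch() {}

    static <P, L> List<CompletableFuture<L>> allocateLogicalSlots(
            CompletableFuture<P> physicalSlotFuture,
            int numLogicalSlots,
            Function<P, L> toLogicalSlot) {

        List<CompletableFuture<L>> logicalSlots = new ArrayList<>(numLogicalSlots);

        if (physicalSlotFuture.isCompletedExceptionally()) {
            // Immediate failure: hand back already-failed futures so the
            // scheduler can cancel the whole bulk without waiting for a timeout.
            for (int i = 0; i < numLogicalSlots; i++) {
                CompletableFuture<L> failed = new CompletableFuture<>();
                physicalSlotFuture.whenComplete(
                        (ignored, failure) -> failed.completeExceptionally(failure));
                logicalSlots.add(failed);
            }
            return logicalSlots;
        }

        // Normal path: each logical slot is derived from the still pending
        // physical slot future.
        for (int i = 0; i < numLogicalSlots; i++) {
            logicalSlots.add(physicalSlotFuture.thenApply(toLogicalSlot));
        }
        return logicalSlots;
    }
}
{code}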



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-20591) Remove JobManagerOptions.SCHEDULING_STRATEGY

2020-12-14 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-20591:
--

Assignee: Robert Metzger

> Remove JobManagerOptions.SCHEDULING_STRATEGY
> 
>
> Key: FLINK-20591
> URL: https://issues.apache.org/jira/browse/FLINK-20591
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Assignee: Robert Metzger
>Priority: Major
> Fix For: 1.13.0
>
>
> In order to disable the legacy scheduling strategies we need to remove the 
> {{JobManagerOptions.SCHEDULING_STRATEGY}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14381: [FLINK-20588] Added resource files for deploying a Flink on Mesos cluster locally.

2020-12-14 Thread GitBox


flinkbot commented on pull request #14381:
URL: https://github.com/apache/flink/pull/14381#issuecomment-744503895


   
   ## CI report:
   
   * 6ea39ec0119663f4641aa5ae4074efa5d7a34322 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14346: [FLINK-20354] Rework standalone docs pages

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14346:
URL: https://github.com/apache/flink/pull/14346#issuecomment-741680007


   
   ## CI report:
   
   * 9ca5fa15b0fd8c3ca8d76672866c6d8db2623be8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10847)
 
   * 395f8fc742495a2a4dccf2597123c3d2b1986e02 UNKNOWN
   * 8e4e06a171990c09da0438ed1b2432df5ec59989 UNKNOWN
   * 293ea9218bb6340b069cb9e55f55052b8ed4e005 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10864)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14312: [FLINK-20491] Support Broadcast State in BATCH execution mode

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14312:
URL: https://github.com/apache/flink/pull/14312#issuecomment-738876739


   
   ## CI report:
   
   * 6d0f9c6a32d65c82e657cfcab048a98aa4445429 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10849)
 
   * 897fe1eb2bb46f63f8cea373c5b5d917070823b5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10851)
 
   * b7d1707557148278604a497b07e2682b342da246 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10863)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20584) Document: Streaming File Sink - ORC Format, example of Scala, file is empty after being written

2020-12-14 Thread lundong he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249028#comment-17249028
 ] 

lundong he commented on FLINK-20584:


I have fixed this problem and created a pull request, the related link is 
below: 

https://github.com/apache/flink/pull/14374

> Document: Streaming File Sink - ORC Format, example of Scala, file is empty 
> after being written
> ---
>
> Key: FLINK-20584
> URL: https://issues.apache.org/jira/browse/FLINK-20584
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.11.0, 1.12.0, 1.11.1, 1.11.2
>Reporter: lundong he
>Priority: Minor
>  Labels: pull-request-available
>
> [https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/connectors/streamfile_sink.html]
>  
> According to the current Scala example code, the written ORC file is empty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14381: [FLINK-20588] Added resource files for deploying a Flink on Mesos cluster locally.

2020-12-14 Thread GitBox


flinkbot commented on pull request #14381:
URL: https://github.com/apache/flink/pull/14381#issuecomment-744496137


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6ea39ec0119663f4641aa5ae4074efa5d7a34322 (Mon Dec 14 
14:57:50 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20588) Add docker-compose as appendix to Mesos documentation

2020-12-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20588:
---
Labels: pull-request-available  (was: )

> Add docker-compose as appendix to Mesos documentation
> -
>
> Key: FLINK-20588
> URL: https://issues.apache.org/jira/browse/FLINK-20588
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Mesos, Documentation
>Affects Versions: 1.13.0
>Reporter: Matthias
>Assignee: Matthias
>Priority: Major
>  Labels: pull-request-available
>
> Dockerfile and docker-compose.yml can be added to the Mesos documentation as 
> appendix to provide an easy entry point for starting to work with 
> Mesos/Marathon.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] XComp opened a new pull request #14381: [FLINK-20588] Added resource files for deploying a Flink on Mesos cluster locally.

2020-12-14 Thread GitBox


XComp opened a new pull request #14381:
URL: https://github.com/apache/flink/pull/14381


   ## What is the purpose of the change
   
   The configuration files enable the user to run Flink on Mesos locally.
   
   ## Brief change log
   
   The Mesos documentation was extended with an appendix.
   
   
   ## Verifying this change
   
   I ran the whole process of creating the Mesos cluster and deploying Flink on 
it + starting a job locally.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14312: [FLINK-20491] Support Broadcast State in BATCH execution mode

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14312:
URL: https://github.com/apache/flink/pull/14312#issuecomment-738876739


   
   ## CI report:
   
   * 9f3deb69af6892fc676aa87e1d1b55981eded305 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10814)
 
   * 6d0f9c6a32d65c82e657cfcab048a98aa4445429 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10849)
 
   * 897fe1eb2bb46f63f8cea373c5b5d917070823b5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10851)
 
   * b7d1707557148278604a497b07e2682b342da246 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-20593) Remove EagerSchedulingStrategy

2020-12-14 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-20593:
--

Assignee: Robert Metzger

> Remove EagerSchedulingStrategy
> --
>
> Key: FLINK-20593
> URL: https://issues.apache.org/jira/browse/FLINK-20593
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Assignee: Robert Metzger
>Priority: Major
> Fix For: 1.13.0
>
>
> The old {{SchedulingStrategy}} {{EagerSchedulingStrategy}} is no longer 
> needed and can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20541) ClusterID is not used in the method!

2020-12-14 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-20541:
-
Fix Version/s: (was: 1.11.3)

> ClusterID is not used in the method!
> 
>
> Key: FLINK-20541
> URL: https://issues.apache.org/jira/browse/FLINK-20541
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Affects Versions: 1.11.2
>Reporter: lixiaobao
>Priority: Minor
>  Labels: pull-request-available
>
> {code:java}
> package org.apache.flink.client.cli;
> import org.apache.flink.annotation.Internal;
> import 
> org.apache.flink.client.deployment.application.ApplicationConfiguration;
> import org.apache.flink.configuration.Configuration;
> /**
>  * An interface to be used by the {@link CliFrontend}
>  * to submit user programs for execution.
>  */
> @Internal
> public interface ApplicationDeployer {
>/**
> * Submits a user program for execution and runs the main user method on 
> the cluster.
> *
> * @param configuration the configuration containing all the necessary
> *information about submitting the user program.
> * @param applicationConfiguration an {@link ApplicationConfiguration} 
> specific to
> *   the application to be executed.
> */
> <ClusterID> void run(
>  final Configuration configuration,
>  final ApplicationConfiguration applicationConfiguration) throws 
> Exception;
> }
> {code}
> <ClusterID> is not used



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-20592) Remove LazyFromSourcesSchedulingStrategy

2020-12-14 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-20592:
--

Assignee: Robert Metzger

> Remove LazyFromSourcesSchedulingStrategy
> 
>
> Key: FLINK-20592
> URL: https://issues.apache.org/jira/browse/FLINK-20592
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Assignee: Robert Metzger
>Priority: Major
> Fix For: 1.13.0
>
>
> The {{SchedulingStrategy}} {{LazyFromSourcesSchedulingStrategy}} is no longer 
> needed. We should remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wenlong88 commented on a change in pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


wenlong88 commented on a change in pull request #14380:
URL: https://github.com/apache/flink/pull/14380#discussion_r542417579



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecMultipleInput.java
##
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.planner.delegation.StreamPlanner;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
+
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Stream exec node for multiple input which contains a sub-graph of {@link 
ExecNode}s.
+ * The root node of the sub-graph is {@link #outputNode}, and the leaf nodes 
of the sub-graph are
+ * the output nodes of the {@link #getInputNodes()}.
+ *
+ * The following example shows a graph of {@code ExecNode}s with multiple 
input node:
+ * {@code
+ *         Sink
+ *           |
+ * +---------+--------+
+ * |         |        |
+ * |       Join       |
+ * |      /    \      | BatchExecMultipleInput
+ * |    Agg1    Agg2  |
+ * |     |       |    |
+ * +-----+-------+----+
+ *       |       |
+ * Exchange1 Exchange2
+ *       |       |
+ *     Scan1   Scan2
+ * }
+ *
+ * The multiple input node contains three nodes: `Join`, `Agg1` and `Agg2`.
+ * `Join` is the root node ({@link #outputNode}) of the sub-graph,
+ * `Agg1` and `Agg2` are the leaf nodes of the sub-graph,
+ * `Exchange1` and `Exchange2` are the input nodes.
+ */
+public class StreamExecMultipleInput extends StreamExecNode<RowData> {
+
+   private final ExecNode<?> outputNode;
+
+   public StreamExecMultipleInput(
+   List<ExecNode<?>> inputNodes,
+   ExecNode<?> outputNode,

Review comment:
   RowType output instead of outputNode would be enough, I think.

##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecMultipleInput.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.batch;
+
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.streaming.api.operators.ChainingStrategy;
+import 
org.apache.flink.streaming.api.transformations.MultipleInputTransformation;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.planner.delegation.BatchPlanner;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecEdge;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
+import org.apache.flink.table.planner.plan.nodes.exec.utils.ExecNodeUtil;
+import 
org.apache.flink.table.runtime.operators.multipleinput.BatchMultipleInputStreamOperatorFactory;
+import 
org.apache.flink.table.runtime.operators.multipleinput.TableOperatorWrapperGenerator;
+import org.apache.flink.table.runtime.operators.multipleinput.input.InputSpec;
+import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
+
+import org.apache.commons.lang3.tuple.Pair;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Batch exec node for multiple input which contains a sub-graph of {@link 
ExecNode}s.
+ * The root node of the sub-graph is {@link #outputNode}, and the leaf 

[GitHub] [flink] flinkbot edited a comment on pull request #14360: [FLINK-20509][table-planner-blink] Refactor verifyPlan method in TableTestBase

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14360:
URL: https://github.com/apache/flink/pull/14360#issuecomment-742794536


   
   ## CI report:
   
   * 98ab7e587e5e25f0dfdcfd7f0d59b812b4ad8606 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10848)
 
   * b35d38f035e0d4155cc958559b69e7d111463baf Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10853)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14346: [FLINK-20354] Rework standalone docs pages

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14346:
URL: https://github.com/apache/flink/pull/14346#issuecomment-741680007


   
   ## CI report:
   
   * 9ca5fa15b0fd8c3ca8d76672866c6d8db2623be8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10847)
 
   * 395f8fc742495a2a4dccf2597123c3d2b1986e02 UNKNOWN
   * 8e4e06a171990c09da0438ed1b2432df5ec59989 UNKNOWN
   * 293ea9218bb6340b069cb9e55f55052b8ed4e005 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13964: [FLINK-19314][coordination] Add DeclarativeSlotPoolBridge

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #13964:
URL: https://github.com/apache/flink/pull/13964#issuecomment-723100396


   
   ## CI report:
   
   * af771868334b2c8c74c5484b8a6e8ced72422879 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10846)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-19656) Automatically replace delimiter in metric name components

2020-12-14 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249007#comment-17249007
 ] 

Etienne Chauchot edited comment on FLINK-19656 at 12/14/20, 2:03 PM:
-

Hi [~chesnay], thanks for your answers. It is clearer now: the existing filters 
in the reporters only filter out static characters and should also filter out the 
configured per-reporter delimiter. All reporters could have a default filtering 
implementation that filters out the configured delimiter, and some reporters 
could override this implementation to filter out other characters in addition 
to the delimiter.
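
As a rough illustration of that idea, a minimal sketch of such a reporter-side 
default filter (a hypothetical base class, not the actual MetricReporter or 
CharacterFilter API):

{code:java}
// Hypothetical sketch: the reporter strips its own configured delimiter from every
// name component by default; stricter reporters override filterCharacters(...) to
// replace additional characters on top of the delimiter.
public abstract class DelimiterFilteringReporterSketch {

    private final char delimiter;

    protected DelimiterFilteringReporterSketch(char delimiter) {
        this.delimiter = delimiter;
    }

    /** Default implementation: only the configured delimiter is replaced. */
    protected String filterCharacters(String component) {
        return component.replace(delimiter, '_');
    }

    /** Joins already-filtered components with the configured delimiter. */
    protected String assembleMetricName(String... components) {
        StringBuilder name = new StringBuilder();
        for (int i = 0; i < components.length; i++) {
            if (i > 0) {
                name.append(delimiter);
            }
            name.append(filterCharacters(components[i]));
        }
        return name.toString();
    }
}
{code}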


was (Author: echauchot):
Hi [~chesnay], thanks for your answers. It is clearer now: the existing filters 
in the reporters only filter out static chars and should also filter out the 
configured per-reporter delimiter. All reporters could have a default filtering 
implementation that filter out the configured delimiter and some reporters 
could override this implementation to filter out other characters.

> Automatically replace delimiter in metric name components
> -
>
> Key: FLINK-19656
> URL: https://issues.apache.org/jira/browse/FLINK-19656
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: starter
> Fix For: 1.13.0
>
>
> The metric name consists of various components (like job ID, task ID) that 
> are then joined by a delimiter (commonly {{.}}).
> The delimiter isn't just a convention, but also carries semantics for many 
> metric backends, as they organize metrics based on the delimiter.
> This can behave in unfortunate ways if the delimiter is contained within a 
> given component, as it will then be split up by the backend.
> We should automatically filter such occurrences to prevent this from 
> happening.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19656) Automatically replace delimiter in metric name components

2020-12-14 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249007#comment-17249007
 ] 

Etienne Chauchot commented on FLINK-19656:
--

Hi [~chesnay], thanks for your answers. It is clearer now: the existing filters 
in the reporters only filter out static chars and should also filter out the 
configured per-reporter delimiter. All reporters could have a default filtering 
implementation that filter out the configured delimiter and some reporters 
could override this implementation to filter out other characters.

> Automatically replace delimiter in metric name components
> -
>
> Key: FLINK-19656
> URL: https://issues.apache.org/jira/browse/FLINK-19656
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: starter
> Fix For: 1.13.0
>
>
> The metric name consists of various components (like job ID, task ID) that 
> are then joined by a delimiter (commonly {{.}}).
> The delimiter isn't just a convention, but also carries semantics for many 
> metric backends, as they organize metrics based on the delimiter.
> This can behave in unfortunate ways if the delimiter is contained within a 
> given component, as it will then be split up by the backend.
> We should automatically filter such occurrences to prevent this from 
> happening.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14376: [FLINK-18202][PB format] New Format of protobuf

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14376:
URL: https://github.com/apache/flink/pull/14376#issuecomment-744252765


   
   ## CI report:
   
   * 6b2d6e05fdbd4b579c68007e5001bd1cabe1dd7b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10860)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14380: [FLINK-20515][table-planner-blink] Separate the implementation of BatchExecMultipleInput and StreamExecMultipleInput

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14380:
URL: https://github.com/apache/flink/pull/14380#issuecomment-73572


   
   ## CI report:
   
   * 2d5ac91ee373efed1ede7da63abbfdbbfb60aabf Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10862)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14379: [FLINK-20563][hive] Support built-in functions for Hive versions prio…

2020-12-14 Thread GitBox


flinkbot edited a comment on pull request #14379:
URL: https://github.com/apache/flink/pull/14379#issuecomment-73009


   
   ## CI report:
   
   * a413199ba00fdf4b41b09e7cf4bcf02bcd6da0ce Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10861)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



