[jira] [Commented] (FLINK-19462) Checkpoint statistics for unfinished task snapshots

2020-12-10 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247728#comment-17247728
 ] 

Yun Tang commented on FLINK-19462:
--

[~roman_khachatryan] I cannot view your document. Could you open up access to 
it?

> Checkpoint statistics for unfinished task snapshots
> ---
>
> Key: FLINK-19462
> URL: https://issues.apache.org/jira/browse/FLINK-19462
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / Metrics
>Reporter: Nico Kruber
>Priority: Major
>  Labels: usability
>
> If a checkpoint times out, there are currently no stats on the 
> not-yet-finished tasks in the Web UI, so you have to crawl into (debug?) logs.
> It would be nice to have these incomplete stats in there instead so that you 
> know quickly what was going on. I could think of these ways to accomplish 
> this:
>  * the checkpoint coordinator could ask the TMs for it after failing the 
> checkpoint or
>  * the TMs could send the stats when they notice that the checkpoint is 
> aborted
> Maybe there are more options, but I think, this improvement in general would 
> benefit debugging checkpoints.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #13932: [FLINK-19947][Connectors / Common]Support sink parallelism configuration for Print connector

2020-12-10 Thread GitBox


wuchong commented on a change in pull request #13932:
URL: https://github.com/apache/flink/pull/13932#discussion_r540750496



##
File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/factories/PrintTableSinkFactory.java
##
@@ -95,19 +96,22 @@ public DynamicTableSink createDynamicTableSink(Context 
context) {
return new PrintSink(

context.getCatalogTable().getSchema().toPhysicalRowDataType(),
options.get(PRINT_IDENTIFIER),
-   options.get(STANDARD_ERROR));
+   options.get(STANDARD_ERROR),
+   
options.getOptional(FactoryUtil.SINK_PARALLELISM).orElse(null));
}
 
private static class PrintSink implements DynamicTableSink {
 
private final DataType type;
private final String printIdentifier;
private final boolean stdErr;
+   private final Integer parallelism;

Review comment:
   ```suggestion
private final @Nullable Integer parallelism;
   ```
   
   It would be better to add the `@Nullable` annotation. 
   
   See the Flink Code Style Guideline: 
https://flink.apache.org/contributing/code-style-and-quality-java.html#java-optional
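
   For illustration, here is a self-contained sketch (plain Java, hypothetical class and field names, not Flink's actual `PrintSink`) of the pattern the guideline describes: keep a nullable field internally, and expose `Optional` only at the API boundary.

```java
import java.util.Optional;

// Hypothetical sketch of the guideline's pattern, not Flink's real PrintSink:
// the field itself is nullable (in Flink it would carry @javax.annotation.Nullable,
// elided here to stay dependency-free), while callers see an Optional.
final class SinkConfigSketch {

    private final Integer parallelism; // null means "let the planner decide"

    SinkConfigSketch(Integer parallelism) {
        this.parallelism = parallelism;
    }

    // Convert to Optional only at the boundary instead of leaking null.
    Optional<Integer> getParallelism() {
        return Optional.ofNullable(parallelism);
    }

    public static void main(String[] args) {
        System.out.println(new SinkConfigSketch(4).getParallelism().orElse(-1));    // 4
        System.out.println(new SinkConfigSketch(null).getParallelism().orElse(-1)); // -1
    }
}
```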





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20540) The baseurl for pg database is incorrect in JdbcCatalog page

2020-12-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20540:

Component/s: Table SQL / Ecosystem

> The baseurl for pg database is incorrect in JdbcCatalog page
> 
>
> Key: FLINK-20540
> URL: https://issues.apache.org/jira/browse/FLINK-20540
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Documentation, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.1
>Reporter: zhangzhao
>Assignee: zhangzhao
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>  
> {code:java}
> // code placeholder
> import org.apache.flink.connector.jdbc.catalog.JdbcCatalog
> new JdbcCatalog(name, defaultDatabase, username, password, baseUrl){code}
>  
> The baseUrl must end with a / when instantiating JdbcCatalog.
> But according to the [Flink 
> document|https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/dev/table/connectors/jdbc.html#usage-of-postgrescatalog]
>  and code comments, baseUrl should also support the format 
> {{"jdbc:postgresql://:"}}
>  
> When I use baseUrl "{{jdbc:postgresql://:}}", the error stack is:
> {code:java}
> // code placeholder
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not execute application.
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)
> ... 7 more
> Caused by: org.apache.flink.util.FlinkRuntimeException: Could not execute 
> application.
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
> ... 7 more
> Caused by: org.apache.flink.client.program.ProgramInvocationException: The main 
> method caused an error: Failed connecting to 
> jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)
> ... 10 more
> Caused by: org.apache.flink.table.api.ValidationException: Failed 
> connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog.open(AbstractJdbcCatalog.java:100)
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:191)
> org.apache.flink.table.api.internal.TableEnvImpl.registerCatalog(TableEnvImpl.scala:267)
> com.upai.jobs.TableBodySentFields.registerCatalog(TableBodySentFields.scala:25)
> com.upai.jobs.FusionGifShow$.run(FusionGifShow.scala:28)
> com.upai.jobs.FlinkTask$.delayedEndpoint$com$upai$jobs$FlinkTask$1(FlinkTask.scala:41)
> com.upai.jobs.FlinkTask$delayedInit$body.apply(FlinkTask.scala:11)
> scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
> scala.App$$anonfun$main$1.apply(App.scala:76)
> scala.App$$anonfun$main$1.apply(App.scala:76)
> 
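
Until the catalog accepts both forms, a caller-side workaround is to normalize the URL before constructing the catalog. A minimal sketch (the helper class below is hypothetical, not part of Flink):

```java
// Hypothetical helper, not part of Flink: ensure the JdbcCatalog baseUrl ends
// with "/", so "jdbc:postgresql://host:5432" and "jdbc:postgresql://host:5432/"
// both work when the database name is appended.
final class BaseUrlNormalizer {

    static String normalize(String baseUrl) {
        return baseUrl.endsWith("/") ? baseUrl : baseUrl + "/";
    }

    public static void main(String[] args) {
        // Without normalization, appending the database name "flink" would yield
        // the malformed "jdbc:postgresql://host:5432flink" seen in the stack trace.
        System.out.println(normalize("jdbc:postgresql://host:5432") + "flink");
        // prints jdbc:postgresql://host:5432/flink
    }
}
```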

[GitHub] [flink] flinkbot commented on pull request #14367: [FLINK-20352][dos] Add PyFlink job submission section under the Advanced CLI section.

2020-12-10 Thread GitBox


flinkbot commented on pull request #14367:
URL: https://github.com/apache/flink/pull/14367#issuecomment-743031693


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1d552b0e4757d761f98b17b3e319e5ea804b5129 (Fri Dec 11 
07:43:56 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] shuiqiangchen opened a new pull request #14367: [FLINK-20352][dos] Add PyFlink job submission section under the Advanced CLI section.

2020-12-10 Thread GitBox


shuiqiangchen opened a new pull request #14367:
URL: https://github.com/apache/flink/pull/14367


   
   
   ## What is the purpose of the change
   
   *Add PyFlink job submission section under the Advanced CLI section.*
   
   
   ## Brief change log
   
   - *Added a new section named "Submitting PyFlink Jobs" under Advanced CLI 
section*
   - *Corrected the reference links in datastream_tutorial.md and 
table_api_tutorial.md*
   
![image](https://user-images.githubusercontent.com/44767915/101876361-3fcf1a00-3bc7-11eb-8966-b28bb2acdd39.png)
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14307: [FLINK-20209][web] Add tolerable failed checkpoints config to web ui

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14307:
URL: https://github.com/apache/flink/pull/14307#issuecomment-738548986


   
   ## CI report:
   
   * 6814b17ab62ad2853791829c9b5fb806f2b6fac9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10759)
 
   * 4a8bb6eeb532ce5e28869ab0c06c903552a8dccb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wuchong merged pull request #14363: [hotfix][docs] fix typo in upsert-kafka docs

2020-12-10 Thread GitBox


wuchong merged pull request #14363:
URL: https://github.com/apache/flink/pull/14363


   







[jira] [Commented] (FLINK-19146) createMiniBatchTrigger() use OR ,table.exec.mini-batch.size and table.exec.mini-batch.allow-latency

2020-12-10 Thread badqiu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247717#comment-17247717
 ] 

badqiu commented on FLINK-19146:


As you said, it seems like it should work, but my actual test found no effect; 
only the size condition takes effect.

If the output of my "group by" has fewer than 100 rows and the mini-batch size 
is 100, it will not produce any output.
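
The requested OR semantics, flush when either the batch size or the allowed latency is reached, can be sketched with a toy trigger (plain Java; this is an illustration of the requested behavior, not Flink's internal mini-batch implementation):

```java
// Toy OR-trigger: fires when EITHER maxSize elements have been buffered OR
// maxLatencyMs has elapsed since the first buffered element. Only a sketch
// of the requested semantics, not Flink's actual trigger code.
final class OrMiniBatchTrigger {

    private final int maxSize;
    private final long maxLatencyMs;
    private int count;
    private long firstElementTs = -1;

    OrMiniBatchTrigger(int maxSize, long maxLatencyMs) {
        this.maxSize = maxSize;
        this.maxLatencyMs = maxLatencyMs;
    }

    /** Buffers one element arriving at nowMs; returns true if the batch should fire. */
    boolean onElement(long nowMs) {
        if (firstElementTs < 0) {
            firstElementTs = nowMs;
        }
        count++;
        boolean fire = count >= maxSize || nowMs - firstElementTs >= maxLatencyMs;
        if (fire) {
            count = 0;
            firstElementTs = -1;
        }
        return fire;
    }

    public static void main(String[] args) {
        OrMiniBatchTrigger t = new OrMiniBatchTrigger(100, 1000);
        // Only 3 elements arrive, far fewer than size=100, yet latency still flushes:
        System.out.println(t.onElement(0));    // false
        System.out.println(t.onElement(500));  // false
        System.out.println(t.onElement(1200)); // true: 1200 ms since first element
    }
}
```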

> createMiniBatchTrigger() use OR ,table.exec.mini-batch.size and 
> table.exec.mini-batch.allow-latency 
> 
>
> Key: FLINK-19146
> URL: https://issues.apache.org/jira/browse/FLINK-19146
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.1
>Reporter: badqiu
>Priority: Major
> Attachments: mini_batch_trigger_by_latency.png, 
> mini_batch_trigger_by_size.png
>
>
> Using *or* conditions, you can control the total data delay and improve 
> computing performance.
> Increase the batch size to very large, but the data delay is still within the 
> set range.
>  
>  
> table.exec.mini-batch.size is true
> =>
> (table.exec.mini-batch.size or table.exec.mini-batch.allow-latency) is true
>  





[jira] [Updated] (FLINK-20509) Refactor verifyPlan method in TableTestBase

2020-12-10 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-20509:
---
Description: 
 Currently, we use the {{verifyPlan}} method to verify the plan result for both 
the {{RelNode}} plan and the {{ExecNode}} plan, because their instances are the 
same. But once the implementations of {{RelNode}} and {{ExecNode}} are 
separated, we can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on the 
{{ExecNode}} plan. So in order to make those methods clearer, we will do the 
following refactoring:
1. replace {{planBefore}} with {{ast}} in the xml file. {{ast}} is the "Abstract 
Syntax Tree", corresponding to the "Abstract Syntax Tree" item in the explain 
result;
2. remove {{planAfter}}, introduce {{optimized rel plan}} and {{optimized exec 
plan}}. {{optimized rel plan}} is the optimized rel plan, similar to the 
"Optimized Physical Plan" item in the explain result. But unlike 
"Optimized Physical Plan", {{optimized rel plan}} can represent either an 
optimized logical rel plan (for rule testing) or an optimized physical rel plan 
(for changelog validation, etc.). {{optimized exec plan}} is the optimized 
execution plan, corresponding to the "Optimized Execution Plan" item in the 
explain result. See https://issues.apache.org/jira/browse/FLINK-20478 for more 
details about the explain refactoring.
3. keep {{verifyPlan}} method, which will print {{ast}}, {{optimized rel plan}} 
and {{optimized exec plan}}. 
4. add {{verifyRelPlan}} method, which will print {{ast}}, {{optimized rel 
plan}}
5. add {{verifyExecPlan}} method, which will print {{ast}} and {{optimized exec 
plan}}. 

  was:
 Currently, we use {{verifyPlan}} method to verify the plan result for both 
{{RelNode}} plan and {{ExecNode}} plan, because their instances are the same. 
But once the implementation of {{RelNode}} and {{ExecNode}} are separated, we 
can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on {{ExecNode}} plan. So in 
order to make those methods more clear, we will do the following refactoring:
1. replace {{planBefore}} with {{ast}} in xml file. {{ast}} is "Abstract Syntax 
Tree", corresponding to "Abstract Syntax Tree" item in the explain result; 
2. remove {{planAfter}}, introduce  {{optimized rel plan}} and {{optimized exec 
plan}}. {{optimized rel plan}}  is the optimized rel plan, and is similar to 
"Optimized Physical Plan" item in the explain result. but different from 
"Optimized Physical Plan", {{optimized rel plan}} can represent either 
optimized logical rel plan (for rule testing) or optimized physical rel plan 
(for changelog validation, etc). {{optimized exec plan}} is the optimized 
execution plan, corresponding to "Optimized Execution Plan" item in the explain 
result. see https://issues.apache.org/jira/browse/FLINK-20478 for more details 
about explain refactor
2. keep {{verifyPlan}} method, which will print {{ast}}, {{optimized rel plan}} 
and {{optimized exec plan}}. 
3. add {{verifyRelPlan}} method, which will print {{ast}}, {{optimized rel 
plan}}
4. add {{verifyExecPlan}} method, which will print {{ast}} and {{optimized exec 
plan}}. 


> Refactor verifyPlan method in TableTestBase
> ---
>
> Key: FLINK-20509
> URL: https://issues.apache.org/jira/browse/FLINK-20509
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
>
>  Currently, we use {{verifyPlan}} method to verify the plan result for both 
> {{RelNode}} plan and {{ExecNode}} plan, because their instances are the same. 
> But once the implementation of {{RelNode}} and {{ExecNode}} are separated, we 
> can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on {{ExecNode}} plan. So 
> in order to make those methods more clear, we will do the following 
> refactoring:
> 1. replace {{planBefore}} with {{ast}} in xml file. {{ast}} is "Abstract 
> Syntax Tree", corresponding to "Abstract Syntax Tree" item in the explain 
> result; 
> 2. remove {{planAfter}}, introduce  {{optimized rel plan}} and {{optimized 
> exec plan}}. {{optimized rel plan}}  is the optimized rel plan, and is 
> similar to "Optimized Physical Plan" item in the explain result. but 
> different from "Optimized Physical Plan", {{optimized rel plan}} can 
> represent either optimized logical rel plan (for rule testing) or optimized 
> physical rel plan (for changelog validation, etc). {{optimized exec plan}} is 
> the optimized execution plan, corresponding to "Optimized Execution Plan" 
> item in the explain result. see 
> https://issues.apache.org/jira/browse/FLINK-20478 for more details about 
> explain refactor
> 3. keep {{verifyPlan}} method, which will print {{ast}}, {{optimized rel 
> plan}} and {{optimized exec plan}}. 
> 4. add {{verifyRelPlan}} method, which will print {{ast}} and {{optimized rel 
> plan}}
> 5. add {{verifyExecPlan}} method, which will print {{ast}} and {{optimized 
> exec plan}}. 

[jira] [Updated] (FLINK-20509) Refactor verifyPlan method in TableTestBase

2020-12-10 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-20509:
---
Description: 
 Currently, we use the {{verifyPlan}} method to verify the plan result for both 
the {{RelNode}} plan and the {{ExecNode}} plan, because their instances are the 
same. But once the implementations of {{RelNode}} and {{ExecNode}} are 
separated, we can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on the 
{{ExecNode}} plan. So in order to make those methods clearer, we will do the 
following refactoring:
1. replace {{planBefore}} with {{ast}} in the xml file. {{ast}} is the "Abstract 
Syntax Tree", corresponding to the "Abstract Syntax Tree" item in the explain 
result;
2. remove {{planAfter}}, introduce {{optimized rel plan}} and {{optimized exec 
plan}}. {{optimized rel plan}} is the optimized rel plan, similar to the 
"Optimized Physical Plan" item in the explain result. But unlike 
"Optimized Physical Plan", {{optimized rel plan}} can represent either an 
optimized logical rel plan (for rule testing) or an optimized physical rel plan 
(for changelog validation, etc.). {{optimized exec plan}} is the optimized 
execution plan, corresponding to the "Optimized Execution Plan" item in the 
explain result. See https://issues.apache.org/jira/browse/FLINK-20478 for more 
details about the explain refactoring.
3. keep the {{verifyPlan}} method, which will print {{ast}}, {{optimized rel 
plan}} and {{optimized exec plan}}. 
4. add a {{verifyRelPlan}} method, which will print {{ast}} and {{optimized rel 
plan}}
5. add a {{verifyExecPlan}} method, which will print {{ast}} and {{optimized 
exec plan}}. 

  was:
 Currently, we use {{verifyPlan}} method to verify the plan result for both 
{{RelNode}} plan and {{ExecNode}} plan, because their instances are the same. 
But once the implementation of {{RelNode}} and {{ExecNode}} are separated, we 
can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on {{ExecNode}} plan. So in 
order to make those methods more clear, we will introduce {{verifyRelPlan}} and 
{{verifyExecPlan}} to check 

the {{verifyPlan}} method will be separated into two methods, {{verifyRelPlan}} 
for verifying the {{RelNode}} plan, and {{verifyExecPlan}} for verifying the 
{{ExecNode}} plan. 


> Refactor verifyPlan method in TableTestBase
> ---
>
> Key: FLINK-20509
> URL: https://issues.apache.org/jira/browse/FLINK-20509
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
>
>  Currently, we use {{verifyPlan}} method to verify the plan result for both 
> {{RelNode}} plan and {{ExecNode}} plan, because their instances are the same. 
> But once the implementation of {{RelNode}} and {{ExecNode}} are separated, we 
> can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on {{ExecNode}} plan. So 
> in order to make those methods more clear, we will do the following 
> refactoring:
> 1. replace {{planBefore}} with {{ast}} in xml file. {{ast}} is "Abstract 
> Syntax Tree", corresponding to "Abstract Syntax Tree" item in the explain 
> result; 
> 2. remove {{planAfter}}, introduce  {{optimized rel plan}} and {{optimized 
> exec plan}}. {{optimized rel plan}}  is the optimized rel plan, and is 
> similar to "Optimized Physical Plan" item in the explain result. but 
> different from "Optimized Physical Plan", {{optimized rel plan}} can 
> represent either optimized logical rel plan (for rule testing) or optimized 
> physical rel plan (for changelog validation, etc). {{optimized exec plan}} is 
> the optimized execution plan, corresponding to "Optimized Execution Plan" 
> item in the explain result. see 
> https://issues.apache.org/jira/browse/FLINK-20478 for more details about 
> explain refactor
> 2. keep {{verifyPlan}} method, which will print {{ast}}, {{optimized rel 
> plan}} and {{optimized exec plan}}. 
> 3. add {{verifyRelPlan}} method, which will print {{ast}}, {{optimized rel 
> plan}}
> 4. add {{verifyExecPlan}} method, which will print {{ast}} and {{optimized 
> exec plan}}. 





[GitHub] [flink] coolderli commented on pull request #14307: [FLINK-20209][web] Add tolerable failed checkpoints config to web ui

2020-12-10 Thread GitBox


coolderli commented on pull request #14307:
URL: https://github.com/apache/flink/pull/14307#issuecomment-743027033


   > The REST API documentation needs to be updated (the .snapshot file just 
for testing the API stability). (see 
https://github.com/apache/flink/tree/master/flink-docs#rest-api-documentation)
   
   Now it's updated. Thank you!







[GitHub] [flink] coolderli commented on pull request #14307: [FLINK-20209][web] Add tolerable failed checkpoints config to web ui

2020-12-10 Thread GitBox


coolderli commented on pull request #14307:
URL: https://github.com/apache/flink/pull/14307#issuecomment-743026598


   > @coolderli I noticed that the latest PR still combines two git user ids: 
`coolderli` and `XIAOMIlipeidian` as I pointed out in last comment. Which user 
id do you prefer to count as contribution? Please consider to re-push the 
commit with correct information.
   
   Thank you for reminding me. That's because I configured the global 
user.email. Now I've fixed it.







[GitHub] [flink] coolderli commented on pull request #14307: [FLINK-20209][web] Add tolerable failed checkpoints config to web ui

2020-12-10 Thread GitBox


coolderli commented on pull request #14307:
URL: https://github.com/apache/flink/pull/14307#issuecomment-743025320


   @flinkbot  run azure







[jira] [Commented] (FLINK-20557) Support statement set in SQL CLI

2020-12-10 Thread Fangliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247712#comment-17247712
 ] 

Fangliang Liu commented on FLINK-20557:
---

[~jark] Can you assign this to me?

I will implement the following SQL syntax.
{code:java}
BEGIN STATEMENT SET;
>    INSERT INTO ...;
>    INSERT INTO ...;
> END;
{code}

> Support statement set in SQL CLI
> 
>
> Key: FLINK-20557
> URL: https://issues.apache.org/jira/browse/FLINK-20557
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Client
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.13.0
>
>
> Support submitting multiple INSERT INTO statements in a single job in the SQL 
> CLI; this can be done by supporting the statement set syntax in the SQL CLI. 
> The syntax has been discussed and a consensus was reached on the mailing list: 
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-SQL-Syntax-for-Table-API-StatementSet-td42515.html





[jira] [Resolved] (FLINK-20554) The Checkpointed Data Size of the Latest Completed Checkpoint is incorrectly displayed on the Overview page of the UI

2020-12-10 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang resolved FLINK-20554.
--
Resolution: Fixed

> The Checkpointed Data Size of the Latest Completed Checkpoint is incorrectly 
> displayed on the Overview page of the UI
> -
>
> Key: FLINK-20554
> URL: https://issues.apache.org/jira/browse/FLINK-20554
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.11.0
>Reporter: ming li
>Assignee: ming li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.3, 1.12.1
>
> Attachments: image-2020-12-10-11-57-56-888.png
>
>
> The {{Checkpointed Data Size}} of the {{Latest Completed Checkpoint}} always 
> shows '-' in the {{Overview}} of the UI.
> !image-2020-12-10-11-57-56-888.png|width=862,height=104!
> I think it should be {{state_size}} instead of {{checkpointed_data_size}} in 
> the 
> code([https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/src/app/pages/job/checkpoints/job-checkpoints.component.html#L52]),
>  which should fix this problem.





[jira] [Commented] (FLINK-20554) The Checkpointed Data Size of the Latest Completed Checkpoint is incorrectly displayed on the Overview page of the UI

2020-12-10 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247708#comment-17247708
 ] 

Yun Tang commented on FLINK-20554:
--

Merged
master: 91e81eee9b3096580f6ff830b6b6401cdfd594a4
release-1.11: 1ead23dc2ce4da209cba6e2869a53e2088a8334b
release-1.12: c46b06b74313bcdea2e1c2de043c63a29d693ef7

> The Checkpointed Data Size of the Latest Completed Checkpoint is incorrectly 
> displayed on the Overview page of the UI
> -
>
> Key: FLINK-20554
> URL: https://issues.apache.org/jira/browse/FLINK-20554
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.11.0
>Reporter: ming li
>Assignee: ming li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.3, 1.12.1
>
> Attachments: image-2020-12-10-11-57-56-888.png
>
>
> The {{Checkpointed Data Size}} of the {{Latest Completed Checkpoint}} always 
> shows '-' in the {{Overview}} of the UI.
> !image-2020-12-10-11-57-56-888.png|width=862,height=104!
> I think it should be {{state_size}} instead of {{checkpointed_data_size}} in 
> the 
> code([https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/src/app/pages/job/checkpoints/job-checkpoints.component.html#L52]),
>  which should fix this problem.





[jira] [Commented] (FLINK-19146) createMiniBatchTrigger() use OR ,table.exec.mini-batch.size and table.exec.mini-batch.allow-latency

2020-12-10 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247709#comment-17247709
 ] 

Jark Wu commented on FLINK-19146:
-

Sorry [~badqiu], I don't get what the problem is. Currently, mini-batch is 
already triggered when either the size or the allow-latency condition is 
satisfied.

> createMiniBatchTrigger() use OR ,table.exec.mini-batch.size and 
> table.exec.mini-batch.allow-latency 
> 
>
> Key: FLINK-19146
> URL: https://issues.apache.org/jira/browse/FLINK-19146
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.1
>Reporter: badqiu
>Priority: Major
> Attachments: mini_batch_trigger_by_latency.png, 
> mini_batch_trigger_by_size.png
>
>
> Using *or* conditions, you can control the total data delay and improve 
> computing performance.
> Increase the batch size to very large, but the data delay is still within the 
> set range.
>  
>  
> table.exec.mini-batch.size is true
> =>
> (table.exec.mini-batch.size or table.exec.mini-batch.allow-latency) is true
>  





[jira] [Closed] (FLINK-19146) createMiniBatchTrigger() use OR ,table.exec.mini-batch.size and table.exec.mini-batch.allow-latency

2020-12-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-19146.
---
Resolution: Not A Problem

> createMiniBatchTrigger() use OR ,table.exec.mini-batch.size and 
> table.exec.mini-batch.allow-latency 
> 
>
> Key: FLINK-19146
> URL: https://issues.apache.org/jira/browse/FLINK-19146
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.1
>Reporter: badqiu
>Priority: Major
> Attachments: mini_batch_trigger_by_latency.png, 
> mini_batch_trigger_by_size.png
>
>
> Using *or* conditions, you can control the total data delay and improve 
> computing performance.
> Increase the batch size to very large, but the data delay is still within the 
> set range.
>  
>  
> table.exec.mini-batch.size is true
> =>
> (table.exec.mini-batch.size or table.exec.mini-batch.allow-latency) is true
>  





[GitHub] [flink] Myasuka closed pull request #14356: [FLINK-20554][webui] Corrected the Checkpointed Data Size display of Latest Completed Checkpoint on the Overview page

2020-12-10 Thread GitBox


Myasuka closed pull request #14356:
URL: https://github.com/apache/flink/pull/14356


   







[jira] [Updated] (FLINK-20455) Add check to LicenseChecker for top level /LICENSE files in shaded jars

2020-12-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-20455:
-
Fix Version/s: 1.11.3

> Add check to LicenseChecker for top level /LICENSE files in shaded jars
> ---
>
> Key: FLINK-20455
> URL: https://issues.apache.org/jira/browse/FLINK-20455
> Project: Flink
>  Issue Type: Task
>  Components: Build System / CI
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.3, 1.13.0
>
>
> During the release verification of the 1.12.0 release, we noticed several 
> modules containing LICENSE files in the jar file, which are not Apache 
> licenses.
> This could mislead users into thinking that the JARs are licensed not under 
> the ASL but under something else.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20509) Refactor verifyPlan method in TableTestBase

2020-12-10 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-20509:
---
Description: 
 Currently, we use the {{verifyPlan}} method to verify the plan result for both 
the {{RelNode}} plan and the {{ExecNode}} plan, because their instances are the 
same. But once the implementations of {{RelNode}} and {{ExecNode}} are 
separated, we can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on the 
{{ExecNode}} plan. So in order to make those methods clearer, the 
{{verifyPlan}} method will be separated into two methods: {{verifyRelPlan}} 
for verifying the {{RelNode}} plan, and {{verifyExecPlan}} for verifying the 
{{ExecNode}} plan. 

  was: Currently, we use {{verifyPlan}} method to verify the plan result for 
both {{RelNode}} plan and {{ExecNode}} plan, because their instances are the 
same. But once the implementation of {{RelNode}} and {{ExecNode}} are 
separated, we can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on 
{{ExecNode}} plan. So in order to make those methods more clear, the 
{{verifyPlan}} method will be separated into two methods, {{verifyRelPlan}} for 
verifying the {{RelNode}} plan, and {{verifyExecPlan}} for verifying the 
{{ExecNode}} plan. 


> Refactor verifyPlan method in TableTestBase
> ---
>
> Key: FLINK-20509
> URL: https://issues.apache.org/jira/browse/FLINK-20509
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
>
>  Currently, we use {{verifyPlan}} method to verify the plan result for both 
> {{RelNode}} plan and {{ExecNode}} plan, because their instances are the same. 
> But once the implementation of {{RelNode}} and {{ExecNode}} are separated, we 
> can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on {{ExecNode}} plan. So 
> in order to make those methods clearer, the {{verifyPlan}} method will be 
> separated into two methods: {{verifyRelPlan}} for verifying the {{RelNode}} 
> plan, and {{verifyExecPlan}} for verifying the {{ExecNode}} plan. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20213) Partition commit is delayed when records keep coming

2020-12-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-20213:
-
Fix Version/s: 1.11.3

> Partition commit is delayed when records keep coming
> 
>
> Key: FLINK-20213
> URL: https://issues.apache.org/jira/browse/FLINK-20213
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Table SQL / Ecosystem
>Affects Versions: 1.11.2
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.3
>
> Attachments: image-2020-11-26-12-00-23-542.png, 
> image-2020-11-26-12-00-55-829.png
>
>
> When partition-commit.delay=0 is set, users expect partitions to be committed 
> immediately.
> However, if records for a partition keep flowing in, the bucket for that 
> partition stays active, and no inactive bucket ever appears.
> We need to consider listening for bucket creation instead.
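The creation-based idea can be sketched as follows. This is an illustrative sketch (not Flink's actual sink/bucket API) assuming partitions arrive roughly in order: with commit delay 0, a newly created bucket signals that all earlier partitions can be committed, even though their buckets may never go inactive.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical committer reacting to bucket *creation* instead of bucket
// inactivity: when a bucket for a new partition appears, commit every
// previously seen partition immediately.
public class BucketCreationCommitter {
    private final Set<String> seenPartitions = new HashSet<>();
    private final Set<String> committed = new HashSet<>();

    /** Called when a record creates a bucket for a partition. */
    public void onBucketCreated(String partition) {
        if (seenPartitions.add(partition)) {
            // A new partition appeared: commit all earlier partitions now,
            // even if their buckets are still receiving late records.
            for (String p : seenPartitions) {
                if (!p.equals(partition)) {
                    committed.add(p);
                }
            }
        }
    }

    public boolean isCommitted(String partition) {
        return committed.contains(partition);
    }
}
```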



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20572) HiveCatalog should be a standalone module

2020-12-10 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-20572:
---
Description: Currently HiveCatalog is the only implementation that supports 
persistent metadata. It's possible that users just want to use HiveCatalog to 
manage metadata and don't intend to read/write Hive tables. However, 
HiveCatalog is part of the Hive connector, which requires lots of Hive 
dependencies, and introducing these dependencies increases the chance of lib 
conflicts. We should investigate whether we can move HiveCatalog to a 
lightweight standalone module.

> HiveCatalog should be a standalone module
> -
>
> Key: FLINK-20572
> URL: https://issues.apache.org/jira/browse/FLINK-20572
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.13.0
>
>
> Currently HiveCatalog is the only implementation that supports persistent 
> metadata. It's possible that users just want to use HiveCatalog to manage 
> metadata and don't intend to read/write Hive tables. However, HiveCatalog 
> is part of the Hive connector, which requires lots of Hive dependencies, and 
> introducing these dependencies increases the chance of lib conflicts. We 
> should investigate whether we can move HiveCatalog to a lightweight 
> standalone module.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20389) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-10 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247704#comment-17247704
 ] 

Robert Metzger commented on FLINK-20389:


Reopen or new ticket? 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10779=logs=baf26b34-3c6a-54e8-f93f-cf269b32f802=6dff16b1-bf54-58f3-23c6-76282f49a185=4382

{code}
Caused by: java.lang.NullPointerException
at 
org.apache.flink.streaming.api.operators.SourceOperator.notifyCheckpointAborted(SourceOperator.java:299)
at 
org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpointAborted(SubtaskCheckpointCoordinatorImpl.java:311)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointAbortAsync$12(StreamTask.java:968)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointOperation$13(StreamTask.java:977)
at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:78)
{code}

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20389
> URL: https://issues.apache.org/jira/browse/FLINK-20389
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Matthias
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
> Attachments: FLINK-20389-failure.log
>
>
> [Build|https://dev.azure.com/mapohl/flink/_build/results?buildId=118=results]
>  failed due to {{UnalignedCheckpointITCase}} caused by a 
> {{NullPointerException}}:
> {code:java}
> Test execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase) failed 
> with:
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> 

[jira] [Updated] (FLINK-20564) Add metrics for ElasticSearch connector

2020-12-10 Thread Peidian Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peidian Li updated FLINK-20564:
---
Labels: features  (was: )

> Add metrics for ElasticSearch connector 
> 
>
> Key: FLINK-20564
> URL: https://issues.apache.org/jira/browse/FLINK-20564
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0
>Reporter: Peidian Li
>Priority: Major
>  Labels: features
>
> The current ElasticSearch connector lacks some metrics. Could we add some, 
> such as P95 and P99 latencies and the number of failed BulkRequests?
> We can implement it in the 
> [BulkProcessorListener|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkBase.java#L389]
>  callback function.
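The percentile metrics mentioned above could be computed from latencies recorded in the bulk listener callbacks. Below is a minimal, self-contained sketch of the percentile bookkeeping only (not the actual Flink or Elasticsearch API; in the real connector this would be fed from the BulkProcessorListener and exposed via Flink's metric group):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal latency tracker using the nearest-rank percentile definition.
// A production metric would typically use a sliding window or a histogram.
public class LatencyStats {
    private final List<Long> latenciesMs = new ArrayList<>();

    /** Record one bulk request's latency in milliseconds. */
    public void record(long latencyMs) {
        latenciesMs.add(latencyMs);
    }

    /** Nearest-rank percentile, e.g. percentile(95) for P95. */
    public long percentile(double p) {
        List<Long> sorted = new ArrayList<>(latenciesMs);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.size());
        return sorted.get(Math.max(0, rank - 1));
    }
}
```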



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20572) HiveCatalog should be a standalone module

2020-12-10 Thread Rui Li (Jira)
Rui Li created FLINK-20572:
--

 Summary: HiveCatalog should be a standalone module
 Key: FLINK-20572
 URL: https://issues.apache.org/jira/browse/FLINK-20572
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive
Reporter: Rui Li
 Fix For: 1.13.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20509) Refactor verifyPlan method in TableTestBase

2020-12-10 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247698#comment-17247698
 ] 

Jark Wu commented on FLINK-20509:
-

I'm fine with {{ast, optimized rel plan, exec plan}} and {{verifyRelPlan}}, 
{{verifyExecPlan}}.

> Refactor verifyPlan method in TableTestBase
> ---
>
> Key: FLINK-20509
> URL: https://issues.apache.org/jira/browse/FLINK-20509
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
>
>  Currently, we use {{verifyPlan}} method to verify the plan result for both 
> {{RelNode}} plan and {{ExecNode}} plan, because their instances are the same. 
> But once the implementation of {{RelNode}} and {{ExecNode}} are separated, we 
> can't get {{ESTIMATED_COST}} and {{CHANGELOG_MODE}} on {{ExecNode}} plan. So 
> in order to make those methods more clear, the {{verifyPlan}} method will be 
> separated into two methods, {{verifyRelPlan}} for verifying the {{RelNode}} 
> plan, and {{verifyExecPlan}} for verifying the {{ExecNode}} plan. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20540) The baseurl for pg database is incorrect in JdbcCatalog page

2020-12-10 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-20540:
---
Summary: The baseurl for pg database is incorrect in JdbcCatalog page  
(was: The baseurl for pg database is incorrect in )

> The baseurl for pg database is incorrect in JdbcCatalog page
> 
>
> Key: FLINK-20540
> URL: https://issues.apache.org/jira/browse/FLINK-20540
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Documentation
>Affects Versions: 1.12.0, 1.11.1
>Reporter: zhangzhao
>Assignee: zhangzhao
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>  
> {code:java}
> // code placeholder
> import org.apache.flink.connector.jdbc.catalog.JdbcCatalog
> new JdbcCatalog(name, defaultDatabase, username, password, baseUrl){code}
>  
> The baseUrl must end with / when instantiating JdbcCatalog.
> But according to [Flink 
> document|https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/dev/table/connectors/jdbc.html#usage-of-postgrescatalog]
>  and code comments, baseUrl should support the format 
> {{"jdbc:postgresql://:"}}
>  
> When I use the baseUrl "{{jdbc:postgresql://:}}", the error stack is:
> {code:java}
> // code placeholder
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)\\nCaused by: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not execute application.
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)\\n\\t...
>  7 more\\nCaused by: org.apache.flink.util.FlinkRuntimeException: Could not 
> execute application.
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)\\n\\t...
>  7 more\\nCaused by: 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Failed connecting to 
> jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)\\n\\t...
>  10 more\\nCaused by: org.apache.flink.table.api.ValidationException: Failed 
> connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog.open(AbstractJdbcCatalog.java:100)
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:191)
> org.apache.flink.table.api.internal.TableEnvImpl.registerCatalog(TableEnvImpl.scala:267)
> com.upai.jobs.TableBodySentFields.registerCatalog(TableBodySentFields.scala:25)
> com.upai.jobs.FusionGifShow$.run(FusionGifShow.scala:28)
> com.upai.jobs.FlinkTask$.delayedEndpoint$com$upai$jobs$FlinkTask$1(FlinkTask.scala:41)
> com.upai.jobs.FlinkTask$delayedInit$body.apply(FlinkTask.scala:11)
> scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
> scala.App$$anonfun$main$1.apply(App.scala:76)
> 
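Until the documented URL formats are accepted directly, one caller-side workaround is to normalize the base URL before constructing the catalog. The helper below is illustrative (the class and method names are not part of Flink):

```java
// Hypothetical helper: JdbcCatalog in the affected versions requires baseUrl
// to end with '/', so append one if it is missing before constructing the
// catalog, e.g.
//   new JdbcCatalog(name, db, user, pass,
//                   JdbcUrlUtil.normalizeBaseUrl("jdbc:postgresql://host:5432"))
public class JdbcUrlUtil {
    public static String normalizeBaseUrl(String baseUrl) {
        return baseUrl.endsWith("/") ? baseUrl : baseUrl + "/";
    }
}
```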

[jira] [Updated] (FLINK-20540) The baseurl for pg database is incorrect in

2020-12-10 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-20540:
---
Summary: The baseurl for pg database is incorrect in   (was: Failed 
connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC)

> The baseurl for pg database is incorrect in 
> 
>
> Key: FLINK-20540
> URL: https://issues.apache.org/jira/browse/FLINK-20540
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Documentation
>Affects Versions: 1.12.0, 1.11.1
>Reporter: zhangzhao
>Assignee: zhangzhao
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>  
> {code:java}
> // code placeholder
> import org.apache.flink.connector.jdbc.catalog.JdbcCatalog
> new JdbcCatalog(name, defaultDatabase, username, password, baseUrl){code}
>  
> The baseUrl must end with / when instantiating JdbcCatalog.
> But according to [Flink 
> document|https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/dev/table/connectors/jdbc.html#usage-of-postgrescatalog]
>  and code comments, baseUrl should support the format 
> {{"jdbc:postgresql://:"}}
>  
> When I use the baseUrl "{{jdbc:postgresql://:}}", the error stack is:
> {code:java}
> // code placeholder
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)\\nCaused by: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not execute application.
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)\\n\\t...
>  7 more\\nCaused by: org.apache.flink.util.FlinkRuntimeException: Could not 
> execute application.
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)\\n\\t...
>  7 more\\nCaused by: 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Failed connecting to 
> jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)\\n\\t...
>  10 more\\nCaused by: org.apache.flink.table.api.ValidationException: Failed 
> connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog.open(AbstractJdbcCatalog.java:100)
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:191)
> org.apache.flink.table.api.internal.TableEnvImpl.registerCatalog(TableEnvImpl.scala:267)
> com.upai.jobs.TableBodySentFields.registerCatalog(TableBodySentFields.scala:25)
> com.upai.jobs.FusionGifShow$.run(FusionGifShow.scala:28)
> com.upai.jobs.FlinkTask$.delayedEndpoint$com$upai$jobs$FlinkTask$1(FlinkTask.scala:41)
> com.upai.jobs.FlinkTask$delayedInit$body.apply(FlinkTask.scala:11)
> scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
> scala.App$$anonfun$main$1.apply(App.scala:76)
> 

[jira] [Updated] (FLINK-20540) Failed connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC

2020-12-10 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-20540:
---
Component/s: Documentation

> Failed connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via 
> JDBC
> --
>
> Key: FLINK-20540
> URL: https://issues.apache.org/jira/browse/FLINK-20540
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Documentation
>Affects Versions: 1.12.0, 1.11.1
>Reporter: zhangzhao
>Assignee: zhangzhao
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>  
> {code:java}
> // code placeholder
> import org.apache.flink.connector.jdbc.catalog.JdbcCatalog
> new JdbcCatalog(name, defaultDatabase, username, password, baseUrl){code}
>  
> The baseUrl must end with / when instantiating JdbcCatalog.
> But according to [Flink 
> document|https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/dev/table/connectors/jdbc.html#usage-of-postgrescatalog]
>  and code comments, baseUrl should support the format 
> {{"jdbc:postgresql://:"}}
>  
> When I use the baseUrl "{{jdbc:postgresql://:}}", the error stack is:
> {code:java}
> // code placeholder
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)\\nCaused by: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not execute application.
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)\\n\\t...
>  7 more\\nCaused by: org.apache.flink.util.FlinkRuntimeException: Could not 
> execute application.
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)\\n\\t...
>  7 more\\nCaused by: 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Failed connecting to 
> jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)\\n\\t...
>  10 more\\nCaused by: org.apache.flink.table.api.ValidationException: Failed 
> connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog.open(AbstractJdbcCatalog.java:100)
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:191)
> org.apache.flink.table.api.internal.TableEnvImpl.registerCatalog(TableEnvImpl.scala:267)
> com.upai.jobs.TableBodySentFields.registerCatalog(TableBodySentFields.scala:25)
> com.upai.jobs.FusionGifShow$.run(FusionGifShow.scala:28)
> com.upai.jobs.FlinkTask$.delayedEndpoint$com$upai$jobs$FlinkTask$1(FlinkTask.scala:41)
> com.upai.jobs.FlinkTask$delayedInit$body.apply(FlinkTask.scala:11)
> scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
> scala.App$$anonfun$main$1.apply(App.scala:76)
> scala.App$$anonfun$main$1.apply(App.scala:76)
> 

[jira] [Updated] (FLINK-20540) Failed connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC

2020-12-10 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-20540:
---
Priority: Minor  (was: Major)

> Failed connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via 
> JDBC
> --
>
> Key: FLINK-20540
> URL: https://issues.apache.org/jira/browse/FLINK-20540
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.12.0, 1.11.1
>Reporter: zhangzhao
>Assignee: zhangzhao
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>  
> {code:java}
> // code placeholder
> import org.apache.flink.connector.jdbc.catalog.JdbcCatalog
> new JdbcCatalog(name, defaultDatabase, username, password, baseUrl){code}
>  
> The baseUrl must end with / when instantiating JdbcCatalog.
> But according to [Flink 
> document|https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/dev/table/connectors/jdbc.html#usage-of-postgrescatalog]
>  and code comments, baseUrl should support the format 
> {{"jdbc:postgresql://:"}}
>  
> When I use the baseUrl "{{jdbc:postgresql://:}}", the error stack is:
> {code:java}
> // code placeholder
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)\\nCaused by: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not execute application.
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)\\n\\t...
>  7 more\\nCaused by: org.apache.flink.util.FlinkRuntimeException: Could not 
> execute application.
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)\\n\\t...
>  7 more\\nCaused by: 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Failed connecting to 
> jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)
>     ... 10 more
> Caused by: org.apache.flink.table.api.ValidationException: Failed connecting 
> to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog.open(AbstractJdbcCatalog.java:100)
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:191)
> org.apache.flink.table.api.internal.TableEnvImpl.registerCatalog(TableEnvImpl.scala:267)
> com.upai.jobs.TableBodySentFields.registerCatalog(TableBodySentFields.scala:25)
> com.upai.jobs.FusionGifShow$.run(FusionGifShow.scala:28)
> com.upai.jobs.FlinkTask$.delayedEndpoint$com$upai$jobs$FlinkTask$1(FlinkTask.scala:41)
> com.upai.jobs.FlinkTask$delayedInit$body.apply(FlinkTask.scala:11)
> scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
> scala.App$$anonfun$main$1.apply(App.scala:76)
> scala.App$$anonfun$main$1.apply(App.scala:76)
> 
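The root cause chain above ends in a malformed JDBC URL: the port and the database name are fused together (`...:5432flink`) because the `/` separator is missing. A minimal sketch of the expected Postgres URL shape, reusing the host and database names from the trace purely as illustration:

```java
public class JdbcUrlCheck {

    // Well-formed Postgres JDBC URL: jdbc:postgresql://<host>:<port>/<database>
    static String postgresUrl(String host, int port, String database) {
        return "jdbc:postgresql://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        // The trace shows "...:5432flink"; the missing '/' is why
        // AbstractJdbcCatalog#open cannot connect.
        System.out.println(postgresUrl("flink-postgres.cdn-flink", 5432, "flink"));
        // prints jdbc:postgresql://flink-postgres.cdn-flink:5432/flink
    }
}
```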

[GitHub] [flink] flinkbot edited a comment on pull request #14366: Update tableApi.md

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14366:
URL: https://github.com/apache/flink/pull/14366#issuecomment-742969512


   
   ## CI report:
   
   * a8da3d7492c1ef1e4c2ae157e399428141b3b8a1 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10790)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20570) The `NOTE` tip style is different from the others in process_function page.

2020-12-10 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-20570:
---
Component/s: API / DataStream

> The `NOTE` tip style is different from the others in process_function page.
> ---
>
> Key: FLINK-20570
> URL: https://issues.apache.org/jira/browse/FLINK-20570
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Documentation
>Affects Versions: 1.12.0
>Reporter: shizhengchao
>Priority: Minor
>
> In `/docs/stream/operators/process_function.md`, line 252, the `NOTE` CSS 
> style is different from the others.
> {code:java}
> current is: **NOTE:**
> and another style is : Note
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20570) The `NOTE` tip style is different from the others in process_function page.

2020-12-10 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-20570:
---
Summary: The `NOTE` tip style is different from the others in 
process_function page.  (was:  `/docs/stream/operators/process_function.md`, 
line 252. The `NOTE tip` css style is different from the others.)

> The `NOTE` tip style is different from the others in process_function page.
> ---
>
> Key: FLINK-20570
> URL: https://issues.apache.org/jira/browse/FLINK-20570
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.12.0
>Reporter: shizhengchao
>Priority: Minor
>
> In `/docs/stream/operators/process_function.md`, line 252, the `NOTE` CSS 
> style is different from the others.
> {code:java}
> current is: **NOTE:**
> and another style is : Note
> {code}



--


[GitHub] [flink] cmdares commented on pull request #11999: [FLINK-14100][jdbc] Added Oracle dialect

2020-12-10 Thread GitBox


cmdares commented on pull request #11999:
URL: https://github.com/apache/flink/pull/11999#issuecomment-742999433


   
https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/dev/table/connectors/jdbc.html
   
   Database | Upsert Grammar
   -- | --
   MySQL | INSERT .. ON DUPLICATE KEY UPDATE ..
   PostgreSQL | INSERT .. ON CONFLICT .. DO UPDATE SET ..
   Oracle | MERGE INTO
   merge into [target-table] A using [source-table sql] B on ([conditional 
expression] and [...]...)
   when matched then [update sql] 
   when not matched then [insert sql] 
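To make the Oracle row above concrete, here is a hedged sketch of how a dialect could assemble such a MERGE-based upsert; the class and method names are illustrative and are not the actual Flink `JdbcDialect` API:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class OracleUpsertSketch {

    // Builds an Oracle MERGE upsert for the given table, key columns and all
    // columns, following the grammar quoted above. Illustrative only: assumes
    // at least one non-key column, no quoting/escaping of identifiers.
    static String mergeStatement(String table, String[] keys, String[] fields) {
        String source = Arrays.stream(fields)
                .map(f -> "? AS " + f)
                .collect(Collectors.joining(", ", "SELECT ", " FROM DUAL"));
        String on = Arrays.stream(keys)
                .map(k -> "t." + k + " = s." + k)
                .collect(Collectors.joining(" AND "));
        String update = Arrays.stream(fields)
                .filter(f -> !Arrays.asList(keys).contains(f))
                .map(f -> "t." + f + " = s." + f)
                .collect(Collectors.joining(", "));
        String insertCols = String.join(", ", fields);
        String insertVals = Arrays.stream(fields)
                .map(f -> "s." + f)
                .collect(Collectors.joining(", "));
        return "MERGE INTO " + table + " t USING (" + source + ") s ON (" + on + ")"
                + " WHEN MATCHED THEN UPDATE SET " + update
                + " WHEN NOT MATCHED THEN INSERT (" + insertCols + ") VALUES (" + insertVals + ")";
    }

    public static void main(String[] args) {
        System.out.println(mergeStatement(
                "users", new String[] {"id"}, new String[] {"id", "name"}));
    }
}
```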
   







[GitHub] [flink] V1ncentzzZ commented on a change in pull request #14366: Update tableApi.md

2020-12-10 Thread GitBox


V1ncentzzZ commented on a change in pull request #14366:
URL: https://github.com/apache/flink/pull/14366#discussion_r540715362



##
File path: docs/dev/table/tableApi.md
##
@@ -1279,7 +1279,7 @@ Table result = left.join(right)
 
 {% highlight java %}
 // register User-Defined Table Function
-TableFunction split = new MySplitUDTF();
+TableFunction> split = new MySplitUDTF();

Review comment:
   Chinese documents also need to be modified in the same way.









[GitHub] [flink] wangxlong commented on pull request #14363: [hotfix][docs] fix typo in upsert-kafka docs

2020-12-10 Thread GitBox


wangxlong commented on pull request #14363:
URL: https://github.com/apache/flink/pull/14363#issuecomment-742992433


   +1







[jira] [Commented] (FLINK-20505) Yarn provided lib does not work with http paths.

2020-12-10 Thread zoucao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247670#comment-17247670
 ] 

zoucao commented on FLINK-20505:


Hi [~xintongsong], I will change the pattern and do some tests to make sure 
http paths work.

At the same time, thanks for [~ZhenqiuHuang]'s reply and advice.

> Yarn provided lib does not work with http paths.
> 
>
> Key: FLINK-20505
> URL: https://issues.apache.org/jira/browse/FLINK-20505
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Xintong Song
>Assignee: zoucao
>Priority: Major
>
> If an http path is used for provided lib, the following exception will be 
> thrown on the resource manager side:
> {code:java}
> 2020-12-04 17:01:28.955 ERROR org.apache.flink.yarn.YarnResourceManager - 
> Could not start TaskManager in container containerXX.
> org.apache.flink.util.FlinkException: Error to parse 
> YarnLocalResourceDescriptor from YarnLocalResourceDescriptor{key=X.jar, 
> path=https://XXX.jar, size=-1, modificationTime=0, visibility=APPLICATION}
>     at 
> org.apache.flink.yarn.YarnLocalResourceDescriptor.fromString(YarnLocalResourceDescriptor.java:99)
>     at 
> org.apache.flink.yarn.Utils.decodeYarnLocalResourceDescriptorListFromString(Utils.java:721)
>     at org.apache.flink.yarn.Utils.createTaskExecutorContext(Utils.java:626)
>     at 
> org.apache.flink.yarn.YarnResourceManager.getOrCreateContainerLaunchContext(YarnResourceManager.java:746)
>     at 
> org.apache.flink.yarn.YarnResourceManager.createTaskExecutorLaunchContext(YarnResourceManager.java:726)
>     at 
> org.apache.flink.yarn.YarnResourceManager.startTaskExecutorInContainer(YarnResourceManager.java:500)
>     at 
> org.apache.flink.yarn.YarnResourceManager.onContainersOfResourceAllocated(YarnResourceManager.java:455)
>     at 
> org.apache.flink.yarn.YarnResourceManager.lambda$onContainersAllocated$1(YarnResourceManager.java:415)
> {code}
> The problem is that `HttpFileSystem#getFileStatus` returns a file status with 
> length `-1`, while `YarnLocalResourceDescriptor` does not recognize the 
> negative file length.



--


[jira] [Updated] (FLINK-20571) Add dynamic open/close LatencyMarksEmitter to support online debug and monitoring

2020-12-10 Thread zlzhang0122 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zlzhang0122 updated FLINK-20571:

Description: Now, Flink provides latency metrics to monitor latency, but this 
function is mainly used in debugging contexts rather than in production 
contexts because of its throughput impact. If we can provide an API that 
dynamically opens/closes this function, then we can monitor online data latency 
and find out the performance bottleneck in time without restarting the job, 
which may be helpful.   (was: Now, flink has provided latency metrics to 
monitor the latency, but this function mainly used in debugging contexts rather 
than in production contexts because of throughput effect. If we can provider an 
api that can dynamic open/close this function, then we can monitor the online 
data latency and find out the performance bottleneck in time without restart 
the job, which maybe very helpful. )

> Add dynamic open/close LatencyMarksEmitter to support online debug and 
> monitoring
> -
>
> Key: FLINK-20571
> URL: https://issues.apache.org/jira/browse/FLINK-20571
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: zlzhang0122
>Priority: Major
>
> Now, Flink provides latency metrics to monitor latency, but this function is 
> mainly used in debugging contexts rather than in production contexts because 
> of its throughput impact. If we can provide an API that dynamically 
> opens/closes this function, then we can monitor online data latency and find 
> out the performance bottleneck in time without restarting the job, which may 
> be helpful. 



--


[jira] [Updated] (FLINK-20571) Add dynamic open/close LatencyMarksEmitter to support online debug and monitoring

2020-12-10 Thread zlzhang0122 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zlzhang0122 updated FLINK-20571:

Description: Now, Flink provides latency metrics to monitor latency, but this 
function is mainly used in debugging contexts rather than in production 
contexts because of its throughput impact. If we can provide an API that 
dynamically opens/closes this function, then we can monitor online data latency 
and find out the performance bottleneck in time without restarting the job, 
which may be very helpful.   (was: Now, flink has provided latency metrics 
to monitor the latency, but this function mainly used in debugging contexts 
rather than in production contexts because of throughput effect. If we can 
provider an api that can dynamic open/close this function, then we can monitor 
the online data latency and find out the performance bottleneck in time with 
out restart the job, which maybe very helpful. )

> Add dynamic open/close LatencyMarksEmitter to support online debug and 
> monitoring
> -
>
> Key: FLINK-20571
> URL: https://issues.apache.org/jira/browse/FLINK-20571
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: zlzhang0122
>Priority: Major
>
> Now, Flink provides latency metrics to monitor latency, but this function is 
> mainly used in debugging contexts rather than in production contexts because 
> of its throughput impact. If we can provide an API that dynamically 
> opens/closes this function, then we can monitor online data latency and find 
> out the performance bottleneck in time without restarting the job, which may 
> be very helpful. 



--


[GitHub] [flink] flinkbot edited a comment on pull request #14365: [hotfix][filesystem]improve the filesystem connector doc

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14365:
URL: https://github.com/apache/flink/pull/14365#issuecomment-742964328


   
   ## CI report:
   
   * c70e4e2b95cde7e6ad012819ba303c6d5b77b694 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10789)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14358: [FLINK-20561][docs] Add documentation for `records-lag-max` metric.

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14358:
URL: https://github.com/apache/flink/pull/14358#issuecomment-742572892


   
   ## CI report:
   
   * 97328897a804b3e088e7f431230b2c5ab1c45cfd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10786)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-20571) Add dynamic open/close LatencyMarksEmitter to support online debug and monitoring

2020-12-10 Thread zlzhang0122 (Jira)
zlzhang0122 created FLINK-20571:
---

 Summary: Add dynamic open/close LatencyMarksEmitter to support 
online debug and monitoring
 Key: FLINK-20571
 URL: https://issues.apache.org/jira/browse/FLINK-20571
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream
Reporter: zlzhang0122


Now, Flink provides latency metrics to monitor latency, but this function is 
mainly used in debugging contexts rather than in production contexts because of 
its throughput impact. If we can provide an API that dynamically opens/closes 
this function, then we can monitor online data latency and find out the 
performance bottleneck in time without restarting the job, which may be very 
helpful. 
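For context, latency marker emission is currently controlled only statically at submission time, e.g. through the `metrics.latency.interval` option in `flink-conf.yaml` (values below are illustrative), which is why toggling it today requires restarting the job:

```yaml
# flink-conf.yaml (illustrative values): latency markers are configured
# statically; an interval of 0 disables them, so changing the setting
# requires a job restart -- the gap this issue proposes to close.
metrics.latency.interval: 30000
metrics.latency.granularity: operator
```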



--


[jira] [Created] (FLINK-20570) `/docs/stream/operators/process_function.md`, line 252. The `NOTE tip` css style is different from the others.

2020-12-10 Thread shizhengchao (Jira)
shizhengchao created FLINK-20570:


 Summary:  `/docs/stream/operators/process_function.md`, line 252. 
The `NOTE tip` css style is different from the others.
 Key: FLINK-20570
 URL: https://issues.apache.org/jira/browse/FLINK-20570
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.12.0
Reporter: shizhengchao


In `/docs/stream/operators/process_function.md`, line 252, the `NOTE` CSS 
style is different from the others.
{code:java}
current is: **NOTE:**

and another style is : Note
{code}



--


[GitHub] [flink] flinkbot edited a comment on pull request #14366: Update tableApi.md

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14366:
URL: https://github.com/apache/flink/pull/14366#issuecomment-742969512


   
   ## CI report:
   
   * a8da3d7492c1ef1e4c2ae157e399428141b3b8a1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10790)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14361: [FLINK-19435][connectors/jdbc] Fix deadlock when loading different driver classes concurrently using Class.forName

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14361:
URL: https://github.com/apache/flink/pull/14361#issuecomment-742899629


   
   ## CI report:
   
   * fc55e236955b84396bed5f18851cd9dd0425060a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10783)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #14366: Update tableApi.md

2020-12-10 Thread GitBox


flinkbot commented on pull request #14366:
URL: https://github.com/apache/flink/pull/14366#issuecomment-742969512


   
   ## CI report:
   
   * a8da3d7492c1ef1e4c2ae157e399428141b3b8a1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14365: [hotfix][filesystem]improve the filesystem connector doc

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14365:
URL: https://github.com/apache/flink/pull/14365#issuecomment-742964328


   
   ## CI report:
   
   * c70e4e2b95cde7e6ad012819ba303c6d5b77b694 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10789)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-20505) Yarn provided lib does not work with http paths.

2020-12-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247630#comment-17247630
 ] 

Xintong Song commented on FLINK-20505:
--

[~ZhenqiuHuang],

Yes, I think that's what we are planning to do. If there are no other places in 
Flink that assume a non-negative file length, this simple change should solve 
the problem.

I was asking because you mentioned about the classpath. Looks like we are on 
the same page now. Thanks both of you.

[~zoucao], I've assigned you. Please move ahead.

> Yarn provided lib does not work with http paths.
> 
>
> Key: FLINK-20505
> URL: https://issues.apache.org/jira/browse/FLINK-20505
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Xintong Song
>Assignee: zoucao
>Priority: Major
>
> If an http path is used for provided lib, the following exception will be 
> thrown on the resource manager side:
> {code:java}
> 2020-12-04 17:01:28.955 ERROR org.apache.flink.yarn.YarnResourceManager - 
> Could not start TaskManager in container containerXX.
> org.apache.flink.util.FlinkException: Error to parse 
> YarnLocalResourceDescriptor from YarnLocalResourceDescriptor{key=X.jar, 
> path=https://XXX.jar, size=-1, modificationTime=0, visibility=APPLICATION}
>     at 
> org.apache.flink.yarn.YarnLocalResourceDescriptor.fromString(YarnLocalResourceDescriptor.java:99)
>     at 
> org.apache.flink.yarn.Utils.decodeYarnLocalResourceDescriptorListFromString(Utils.java:721)
>     at org.apache.flink.yarn.Utils.createTaskExecutorContext(Utils.java:626)
>     at 
> org.apache.flink.yarn.YarnResourceManager.getOrCreateContainerLaunchContext(YarnResourceManager.java:746)
>     at 
> org.apache.flink.yarn.YarnResourceManager.createTaskExecutorLaunchContext(YarnResourceManager.java:726)
>     at 
> org.apache.flink.yarn.YarnResourceManager.startTaskExecutorInContainer(YarnResourceManager.java:500)
>     at 
> org.apache.flink.yarn.YarnResourceManager.onContainersOfResourceAllocated(YarnResourceManager.java:455)
>     at 
> org.apache.flink.yarn.YarnResourceManager.lambda$onContainersAllocated$1(YarnResourceManager.java:415)
> {code}
> The problem is that `HttpFileSystem#getFileStatus` returns a file status with 
> length `-1`, while `YarnLocalResourceDescriptor` does not recognize the 
> negative file length.



--


[jira] [Assigned] (FLINK-20505) Yarn provided lib does not work with http paths.

2020-12-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reassigned FLINK-20505:


Assignee: zoucao  (was: Xintong Song)

> Yarn provided lib does not work with http paths.
> 
>
> Key: FLINK-20505
> URL: https://issues.apache.org/jira/browse/FLINK-20505
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Xintong Song
>Assignee: zoucao
>Priority: Major
>
> If an http path is used for provided lib, the following exception will be 
> thrown on the resource manager side:
> {code:java}
> 2020-12-04 17:01:28.955 ERROR org.apache.flink.yarn.YarnResourceManager - 
> Could not start TaskManager in container containerXX.
> org.apache.flink.util.FlinkException: Error to parse 
> YarnLocalResourceDescriptor from YarnLocalResourceDescriptor{key=X.jar, 
> path=https://XXX.jar, size=-1, modificationTime=0, visibility=APPLICATION}
>     at 
> org.apache.flink.yarn.YarnLocalResourceDescriptor.fromString(YarnLocalResourceDescriptor.java:99)
>     at 
> org.apache.flink.yarn.Utils.decodeYarnLocalResourceDescriptorListFromString(Utils.java:721)
>     at org.apache.flink.yarn.Utils.createTaskExecutorContext(Utils.java:626)
>     at 
> org.apache.flink.yarn.YarnResourceManager.getOrCreateContainerLaunchContext(YarnResourceManager.java:746)
>     at 
> org.apache.flink.yarn.YarnResourceManager.createTaskExecutorLaunchContext(YarnResourceManager.java:726)
>     at 
> org.apache.flink.yarn.YarnResourceManager.startTaskExecutorInContainer(YarnResourceManager.java:500)
>     at 
> org.apache.flink.yarn.YarnResourceManager.onContainersOfResourceAllocated(YarnResourceManager.java:455)
>     at 
> org.apache.flink.yarn.YarnResourceManager.lambda$onContainersAllocated$1(YarnResourceManager.java:415)
> {code}
> The problem is that `HttpFileSystem#getFileStatus` returns a file status with 
> length `-1`, while `YarnLocalResourceDescriptor` does not recognize the 
> negative file length.



--


[jira] [Comment Edited] (FLINK-20505) Yarn provided lib does not work with http paths.

2020-12-10 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247626#comment-17247626
 ] 

Zhenqiu Huang edited comment on FLINK-20505 at 12/11/20, 4:41 AM:
--

[~xintongsong]
Sorry for the late response. We have used the HTTP filesystem for remote udfs 
for a while. I think the error is probably due to the negative file size, which 
can't be parsed. Should we just change "size=([\\d])"  to  "size=([\\-?d])" in 
line 44?

private static final Pattern LOCAL_RESOURCE_DESC_FORMAT = 
Pattern.compile("YarnLocalResourceDescriptor\\{" +
"key=(\\S+), path=(\\S+), size=([\\d]+), 
modificationTime=([\\d]+), visibility=(\\S+), type=(\\S+)}"); 




was (Author: zhenqiuhuang):
[~xintongsong]
Sorry for the late response. We use HTTP filesystem for remote udf for a while. 
I think the error is probably due to the negative size of file can't be parsed. 
Should we just change "size=([\\d])"  to  "size=([\\-?d])" in line 44?

private static final Pattern LOCAL_RESOURCE_DESC_FORMAT = 
Pattern.compile("YarnLocalResourceDescriptor\\{" +
"key=(\\S+), path=(\\S+), size=([\\d]+), 
modificationTime=([\\d]+), visibility=(\\S+), type=(\\S+)}"); 



> Yarn provided lib does not work with http paths.
> 
>
> Key: FLINK-20505
> URL: https://issues.apache.org/jira/browse/FLINK-20505
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Major
>
> If an http path is used for provided lib, the following exception will be 
> thrown on the resource manager side:
> {code:java}
> 2020-12-04 17:01:28.955 ERROR org.apache.flink.yarn.YarnResourceManager - 
> Could not start TaskManager in container containerXX.
> org.apache.flink.util.FlinkException: Error to parse 
> YarnLocalResourceDescriptor from YarnLocalResourceDescriptor{key=X.jar, 
> path=https://XXX.jar, size=-1, modificationTime=0, visibility=APPLICATION}
>     at 
> org.apache.flink.yarn.YarnLocalResourceDescriptor.fromString(YarnLocalResourceDescriptor.java:99)
>     at 
> org.apache.flink.yarn.Utils.decodeYarnLocalResourceDescriptorListFromString(Utils.java:721)
>     at org.apache.flink.yarn.Utils.createTaskExecutorContext(Utils.java:626)
>     at 
> org.apache.flink.yarn.YarnResourceManager.getOrCreateContainerLaunchContext(YarnResourceManager.java:746)
>     at 
> org.apache.flink.yarn.YarnResourceManager.createTaskExecutorLaunchContext(YarnResourceManager.java:726)
>     at 
> org.apache.flink.yarn.YarnResourceManager.startTaskExecutorInContainer(YarnResourceManager.java:500)
>     at 
> org.apache.flink.yarn.YarnResourceManager.onContainersOfResourceAllocated(YarnResourceManager.java:455)
>     at 
> org.apache.flink.yarn.YarnResourceManager.lambda$onContainersAllocated$1(YarnResourceManager.java:415)
> {code}
> The problem is that `HttpFileSystem#getFileStatus` returns a file status with 
> length `-1`, while `YarnLocalResourceDescriptor` does not recognize the 
> negative file length.



--


[GitHub] [flink] flinkbot commented on pull request #14365: [hotfix][filesystem]improve the filesystem connector doc

2020-12-10 Thread GitBox


flinkbot commented on pull request #14365:
URL: https://github.com/apache/flink/pull/14365#issuecomment-742964328


   
   ## CI report:
   
   * c70e4e2b95cde7e6ad012819ba303c6d5b77b694 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14364: [FLINK-20473][web]when get metrics option, its hard to see the full name unless u choosed

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14364:
URL: https://github.com/apache/flink/pull/14364#issuecomment-742958685


   
   ## CI report:
   
   * 60f83e5f3fd66bb0e61d55e9d264bf78982b34ef Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10788)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-20505) Yarn provided lib does not work with http paths.

2020-12-10 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247626#comment-17247626
 ] 

Zhenqiu Huang commented on FLINK-20505:
---

[~xintongsong]
Sorry for the late response. We have used the HTTP filesystem for remote udfs 
for a while. I think the error is probably due to the negative file size, which 
can't be parsed. Should we just change "size=([\\d])"  to  "size=([\\-?d])" in 
line 44?

private static final Pattern LOCAL_RESOURCE_DESC_FORMAT = 
Pattern.compile("YarnLocalResourceDescriptor\\{" +
"key=(\\S+), path=(\\S+), size=([\\d]+), 
modificationTime=([\\d]+), visibility=(\\S+), type=(\\S+)}"); 
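A minimal, hypothetical sketch of a pattern that also accepts the negative size: placing `-?` before the digit group (rather than putting `-` and `?` inside the character class) is the usual way to allow an optional minus sign. The pattern is simplified relative to the real `LOCAL_RESOURCE_DESC_FORMAT` (no `type` field):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResourceDescriptorPatternSketch {

    // Simplified stand-in for LOCAL_RESOURCE_DESC_FORMAT; `-?` before the
    // digit group lets the size=-1 reported by HttpFileSystem parse.
    static final Pattern PATTERN = Pattern.compile(
            "YarnLocalResourceDescriptor\\{"
                    + "key=(\\S+), path=(\\S+), size=(-?[\\d]+), "
                    + "modificationTime=([\\d]+), visibility=(\\S+)}");

    public static void main(String[] args) {
        Matcher m = PATTERN.matcher(
                "YarnLocalResourceDescriptor{key=udf.jar, "
                        + "path=https://example.org/udf.jar, "
                        + "size=-1, modificationTime=0, visibility=APPLICATION}");
        if (m.matches()) {
            System.out.println("parsed size: " + m.group(3)); // parsed size: -1
        }
    }
}
```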



> Yarn provided lib does not work with http paths.
> 
>
> Key: FLINK-20505
> URL: https://issues.apache.org/jira/browse/FLINK-20505
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Major
>
> If an http path is used for provided lib, the following exception will be 
> thrown on the resource manager side:
> {code:java}
> 2020-12-04 17:01:28.955 ERROR org.apache.flink.yarn.YarnResourceManager - 
> Could not start TaskManager in container containerXX.
> org.apache.flink.util.FlinkException: Error to parse 
> YarnLocalResourceDescriptor from YarnLocalResourceDescriptor{key=X.jar, 
> path=https://XXX.jar, size=-1, modificationTime=0, visibility=APPLICATION}
>     at 
> org.apache.flink.yarn.YarnLocalResourceDescriptor.fromString(YarnLocalResourceDescriptor.java:99)
>     at 
> org.apache.flink.yarn.Utils.decodeYarnLocalResourceDescriptorListFromString(Utils.java:721)
>     at org.apache.flink.yarn.Utils.createTaskExecutorContext(Utils.java:626)
>     at 
> org.apache.flink.yarn.YarnResourceManager.getOrCreateContainerLaunchContext(YarnResourceManager.java:746)
>     at 
> org.apache.flink.yarn.YarnResourceManager.createTaskExecutorLaunchContext(YarnResourceManager.java:726)
>     at 
> org.apache.flink.yarn.YarnResourceManager.startTaskExecutorInContainer(YarnResourceManager.java:500)
>     at 
> org.apache.flink.yarn.YarnResourceManager.onContainersOfResourceAllocated(YarnResourceManager.java:455)
>     at 
> org.apache.flink.yarn.YarnResourceManager.lambda$onContainersAllocated$1(YarnResourceManager.java:415)
> {code}
> The problem is that `HttpFileSystem#getFileStatus` returns a file status 
> with length `-1`, while `YarnLocalResourceDescriptor` does not recognize the 
> negative file length.
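The size mismatch can be demonstrated with plain `java.util.regex`. The sketch below is illustrative only: the class and field names are invented, the `type` field is dropped to match the logged descriptor above, and the relaxed pattern (allowing an optional leading `-`) is just one possible fix, not necessarily the one Flink adopted.

```java
import java.util.regex.Pattern;

public class ResourceDescriptorParsing {
    // Strict pattern in the spirit of LOCAL_RESOURCE_DESC_FORMAT: the size
    // group ([\d]+) only accepts non-negative integers, so "size=-1" never matches.
    static final Pattern STRICT = Pattern.compile(
            "YarnLocalResourceDescriptor\\{key=(\\S+), path=(\\S+), size=([\\d]+), "
                    + "modificationTime=([\\d]+), visibility=(\\S+)}");

    // Relaxed variant: an optional leading '-' lets the unknown length -1 parse.
    static final Pattern RELAXED = Pattern.compile(
            "YarnLocalResourceDescriptor\\{key=(\\S+), path=(\\S+), size=(-?[\\d]+), "
                    + "modificationTime=([\\d]+), visibility=(\\S+)}");

    static boolean matches(Pattern p, String descriptor) {
        return p.matcher(descriptor).matches();
    }

    public static void main(String[] args) {
        String desc = "YarnLocalResourceDescriptor{key=X.jar, path=https://XXX.jar, "
                + "size=-1, modificationTime=0, visibility=APPLICATION}";
        System.out.println(matches(STRICT, desc));  // false: size=-1 is rejected
        System.out.println(matches(RELAXED, desc)); // true: parses once '-' is allowed
    }
}
```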



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14366: Update tableApi.md

2020-12-10 Thread GitBox


flinkbot commented on pull request #14366:
URL: https://github.com/apache/flink/pull/14366#issuecomment-742963464


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit a8da3d7492c1ef1e4c2ae157e399428141b3b8a1 (Fri Dec 11 
04:38:05 UTC 2020)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] appleyuchi opened a new pull request #14366: Update tableApi.md

2020-12-10 Thread GitBox


appleyuchi opened a new pull request #14366:
URL: https://github.com/apache/flink/pull/14366


   
https://issues.apache.org/jira/browse/FLINK-20567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang edited a comment on pull request #14365: [hotfix][filesystem]improve the filesystem connector doc

2020-12-10 Thread GitBox


leonardBang edited a comment on pull request #14365:
URL: https://github.com/apache/flink/pull/14365#issuecomment-742960206


   Thanks @zhisheng17 for the contribution, LGTM, could you rebase and squash 
this to one commit?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang commented on pull request #14365: [hotfix][filesystem]improve the filesystem connector doc

2020-12-10 Thread GitBox


leonardBang commented on pull request #14365:
URL: https://github.com/apache/flink/pull/14365#issuecomment-742960206


   Thanks @JingsongLi for the contribution, LGTM, could you rebase and squash 
this to one commit?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14364: [FLINK-20473][web]when get metrics option, its hard to see the full name unless u choosed

2020-12-10 Thread GitBox


flinkbot commented on pull request #14364:
URL: https://github.com/apache/flink/pull/14364#issuecomment-742958685


   
   ## CI report:
   
   * 60f83e5f3fd66bb0e61d55e9d264bf78982b34ef UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14365: [hotfix][filesystem]improve the filesystem connector doc

2020-12-10 Thread GitBox


flinkbot commented on pull request #14365:
URL: https://github.com/apache/flink/pull/14365#issuecomment-742958613


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 0eb70dd026763043041f26c35be2d9ceb45f40a8 (Fri Dec 11 
04:19:59 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhisheng17 opened a new pull request #14365: [hotfix][filesystem]improve the filesystem connector doc

2020-12-10 Thread GitBox


zhisheng17 opened a new pull request #14365:
URL: https://github.com/apache/flink/pull/14365


   
   
   ## What is the purpose of the change
   
   improve the filesystem connector doc



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14363: [hotfix] fix typo in upsert-kafka docs

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14363:
URL: https://github.com/apache/flink/pull/14363#issuecomment-742929862


   
   ## CI report:
   
   * 6e07217855efaee26691fd8c1d15d0a0a7650e02 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10787)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14358: [FLINK-20561][docs] Add documentation for `records-lag-max` metric.

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14358:
URL: https://github.com/apache/flink/pull/14358#issuecomment-742572892


   
   ## CI report:
   
   * 97328897a804b3e088e7f431230b2c5ab1c45cfd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10786)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14364: [FLINK-20473][web]when get metrics option, its hard to see the full name unless u choosed

2020-12-10 Thread GitBox


flinkbot commented on pull request #14364:
URL: https://github.com/apache/flink/pull/14364#issuecomment-742953482


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 60f83e5f3fd66bb0e61d55e9d264bf78982b34ef (Fri Dec 11 
04:02:03 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-20473).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20505) Yarn provided lib does not work with http paths.

2020-12-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247619#comment-17247619
 ] 

Xintong Song commented on FLINK-20505:
--

[~ZhenqiuHuang],

Are there any further comments/concerns from your side?

> Yarn provided lib does not work with http paths.
> 
>
> Key: FLINK-20505
> URL: https://issues.apache.org/jira/browse/FLINK-20505
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Major
>
> If an http path is used for provided lib, the following exception will be 
> thrown on the resource manager side:
> {code:java}
> 2020-12-04 17:01:28.955 ERROR org.apache.flink.yarn.YarnResourceManager - 
> Could not start TaskManager in container containerXX.
> org.apache.flink.util.FlinkException: Error to parse 
> YarnLocalResourceDescriptor from YarnLocalResourceDescriptor{key=X.jar, 
> path=https://XXX.jar, size=-1, modificationTime=0, visibility=APPLICATION}
>     at 
> org.apache.flink.yarn.YarnLocalResourceDescriptor.fromString(YarnLocalResourceDescriptor.java:99)
>     at 
> org.apache.flink.yarn.Utils.decodeYarnLocalResourceDescriptorListFromString(Utils.java:721)
>     at org.apache.flink.yarn.Utils.createTaskExecutorContext(Utils.java:626)
>     at 
> org.apache.flink.yarn.YarnResourceManager.getOrCreateContainerLaunchContext(YarnResourceManager.java:746)
>     at 
> org.apache.flink.yarn.YarnResourceManager.createTaskExecutorLaunchContext(YarnResourceManager.java:726)
>     at 
> org.apache.flink.yarn.YarnResourceManager.startTaskExecutorInContainer(YarnResourceManager.java:500)
>     at 
> org.apache.flink.yarn.YarnResourceManager.onContainersOfResourceAllocated(YarnResourceManager.java:455)
>     at 
> org.apache.flink.yarn.YarnResourceManager.lambda$onContainersAllocated$1(YarnResourceManager.java:415)
> {code}
> The problem is that `HttpFileSystem#getFileStatus` returns a file status 
> with length `-1`, while `YarnLocalResourceDescriptor` does not recognize the 
> negative file length.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20473) when get metrics option, its hard to see the full name unless u choosed

2020-12-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20473:
---
Labels: pull-request-available  (was: )

> when get metrics option, its hard to see the full name unless u choosed
> ---
>
> Key: FLINK-20473
> URL: https://issues.apache.org/jira/browse/FLINK-20473
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.11.2
>Reporter: tonychan
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2020-12-04-09-45-16-219.png
>
>
> wish have a more friendly way to see the full name 
> !image-2020-12-04-09-45-16-219.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zlzhang0122 opened a new pull request #14364: [FLINK-20473][web]when get metrics option, its hard to see the full name unless u choosed

2020-12-10 Thread GitBox


zlzhang0122 opened a new pull request #14364:
URL: https://github.com/apache/flink/pull/14364


   ## What is the purpose of the change
   
   This pull request lets users see a metric's full name when they choose a 
metrics option. This situation happens when the full metric name is too long 
and the select widget cannot show it.
   
   
   ## Brief change log
   * Modify the web UI metrics tab so that hovering the cursor over a select 
widget option displays the full metric name.
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-20569) testKafkaSourceSinkWithMetadata hangs

2020-12-10 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-20569:


 Summary: testKafkaSourceSinkWithMetadata hangs
 Key: FLINK-20569
 URL: https://issues.apache.org/jira/browse/FLINK-20569
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Table SQL / Ecosystem
Affects Versions: 1.12.0, 1.13.0
Reporter: Huang Xingbo


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10781=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=f266c805-9429-58ed-2f9e-482e7b82f58b]
{code:java}
2020-12-10T23:10:46.7788275Z Test testKafkaSourceSinkWithMetadata[legacy = 
false, format = 
csv](org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase) is 
running.
2020-12-10T23:10:46.7789360Z 

2020-12-10T23:10:46.7790602Z 23:10:46,776 [main] INFO  
org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl [] - 
Creating topic metadata_topic_csv
2020-12-10T23:10:47.1145296Z 23:10:47,112 [main] WARN  
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Property 
[transaction.timeout.ms] not specified. Setting it to 360 ms
2020-12-10T23:10:47.1683896Z 23:10:47,166 [Sink: 
Sink(table=[default_catalog.default_database.kafka], fields=[physical_1, 
physical_2, physical_3, headers, timestamp]) (1/1)#0] WARN  
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Using 
AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE 
semantic.
2020-12-10T23:10:47.2087733Z 23:10:47,206 [Sink: 
Sink(table=[default_catalog.default_database.kafka], fields=[physical_1, 
physical_2, physical_3, headers, timestamp]) (1/1)#0] INFO  
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Starting 
FlinkKafkaInternalProducer (1/1) to produce into default topic 
metadata_topic_csv
2020-12-10T23:10:47.5157133Z 23:10:47,513 [Source: 
TableSourceScan(table=[[default_catalog, default_database, kafka]], 
fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
Sink: Select table sink (1/1)#0] INFO  
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
Consumer subtask 0 has no restore state.
2020-12-10T23:10:47.5233388Z 23:10:47,521 [Source: 
TableSourceScan(table=[[default_catalog, default_database, kafka]], 
fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
Sink: Select table sink (1/1)#0] INFO  
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
Consumer subtask 0 will start reading the following 1 partitions from the 
earliest offsets: [KafkaTopicPartition{topic='metadata_topic_csv', partition=0}]
2020-12-10T23:10:47.5387239Z 23:10:47,537 [Legacy Source Thread - Source: 
TableSourceScan(table=[[default_catalog, default_database, kafka]], 
fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
Sink: Select table sink (1/1)#0] INFO  
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
Consumer subtask 0 creating fetcher with offsets 
{KafkaTopicPartition{topic='metadata_topic_csv', partition=0}=-915623761775}.
2020-12-11T02:34:02.6860452Z ##[error]The operation was canceled.
{code}
This test started at 2020-12-10T23:10:46.7788275Z and has not been finished at 
2020-12-11T02:34:02.6860452Z



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14363: [hotfix] fix typo in upsert-kafka docs

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14363:
URL: https://github.com/apache/flink/pull/14363#issuecomment-742929862


   
   ## CI report:
   
   * 6e07217855efaee26691fd8c1d15d0a0a7650e02 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10787)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14362: [FLINK-20540] Failed connecting to jdbc:postgresql://flink-postgres

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14362:
URL: https://github.com/apache/flink/pull/14362#issuecomment-742906429


   
   ## CI report:
   
   * 007bdcc0807864eda87a99c32eebd9b7f96b613c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10784)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19146) createMiniBatchTrigger() use OR ,table.exec.mini-batch.size and table.exec.mini-batch.allow-latency

2020-12-10 Thread badqiu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247607#comment-17247607
 ] 

badqiu commented on FLINK-19146:


This problem reproduces if the output result set is small.

> createMiniBatchTrigger() use OR ,table.exec.mini-batch.size and 
> table.exec.mini-batch.allow-latency 
> 
>
> Key: FLINK-19146
> URL: https://issues.apache.org/jira/browse/FLINK-19146
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.1
>Reporter: badqiu
>Priority: Major
> Attachments: mini_batch_trigger_by_latency.png, 
> mini_batch_trigger_by_size.png
>
>
> Using an *or* condition, you can bound the total data delay and improve 
> computing performance: the batch size can be increased to a very large value 
> while the data delay still stays within the configured range.
>  
>  
> table.exec.mini-batch.size is true
> =>
> (table.exec.mini-batch.size or table.exec.mini-batch.allow-latency) is true
>  
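The requested OR semantics can be sketched independently of Flink's internals: a mini-batch flushes when either the buffered element count reaches the configured size or the allowed latency has elapsed since the batch started, whichever comes first. The class below is a hypothetical stand-alone illustration of that trigger logic (all names invented), not Flink's actual `createMiniBatchTrigger()` implementation.

```java
import java.time.Duration;

public class MiniBatchOrTrigger {
    private final long maxSize;        // analogous to table.exec.mini-batch.size
    private final Duration maxLatency; // analogous to table.exec.mini-batch.allow-latency
    private long count;
    private long batchStartNanos = -1;

    public MiniBatchOrTrigger(long maxSize, Duration maxLatency) {
        this.maxSize = maxSize;
        this.maxLatency = maxLatency;
    }

    /** Records one element arriving at nowNanos; returns true if the batch should flush. */
    public boolean onElement(long nowNanos) {
        if (batchStartNanos < 0) {
            batchStartNanos = nowNanos; // first element of a new batch
        }
        count++;
        // OR semantics: flush when EITHER the size OR the latency threshold is hit.
        boolean sizeHit = count >= maxSize;
        boolean latencyHit = nowNanos - batchStartNanos >= maxLatency.toNanos();
        if (sizeHit || latencyHit) {
            count = 0;
            batchStartNanos = -1;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        MiniBatchOrTrigger trigger = new MiniBatchOrTrigger(3, Duration.ofMillis(10));
        System.out.println(trigger.onElement(0L));          // false: 1 element, 0 ms elapsed
        System.out.println(trigger.onElement(1_000_000L));  // false: 2 elements, 1 ms elapsed
        System.out.println(trigger.onElement(2_000_000L));  // true: size threshold (3) hit
        System.out.println(trigger.onElement(3_000_000L));  // false: new batch begins
        System.out.println(trigger.onElement(50_000_000L)); // true: 47 ms >= 10 ms latency
    }
}
```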



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] liming30 commented on pull request #14356: [FLINK-20554][webui] Corrected the Checkpointed Data Size display of Latest Completed Checkpoint on the Overview page

2020-12-10 Thread GitBox


liming30 commented on pull request #14356:
URL: https://github.com/apache/flink/pull/14356#issuecomment-742931764


   > Thanks for your fix! Could you share a picture of checkpoint overview 
after applying this PR.
   
   Hi, @Myasuka, after applying this PR, **Checkpointed Data Size** will 
display the size of the state (incremental checkpoint will display the 
incremental size) instead of '-'. No other UI pages have changed.
   
![image](https://user-images.githubusercontent.com/6950/101855618-b277cf80-3b9e-11eb-920f-a57b2daaa8dc.png)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14363: [hotfix] fix typo in upsert-kafka docs

2020-12-10 Thread GitBox


flinkbot commented on pull request #14363:
URL: https://github.com/apache/flink/pull/14363#issuecomment-742929862


   
   ## CI report:
   
   * 6e07217855efaee26691fd8c1d15d0a0a7650e02 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14358: [FLINK-20561][docs] Add documentation for `records-lag-max` metric.

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14358:
URL: https://github.com/apache/flink/pull/14358#issuecomment-742572892


   
   ## CI report:
   
   * fc19eace2d684e87045650fa35438b78d7f54199 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10785)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10765)
 
   * 97328897a804b3e088e7f431230b2c5ab1c45cfd Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10786)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14363: [hotfix] fix typo in upsert-kafka docs

2020-12-10 Thread GitBox


flinkbot commented on pull request #14363:
URL: https://github.com/apache/flink/pull/14363#issuecomment-742926490


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6e07217855efaee26691fd8c1d15d0a0a7650e02 (Fri Dec 11 
02:35:48 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuxiaoshang opened a new pull request #14363: [hotfix] fix typo in upsert-kafka docs

2020-12-10 Thread GitBox


zhuxiaoshang opened a new pull request #14363:
URL: https://github.com/apache/flink/pull/14363


   
   ## What is the purpose of the change
   
   *fix typo in upsert-kafka docs*
   
   
   ## Brief change log
   
   fix typo in upsert-kafka docs
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)no
 - The serializers: (yes / no / don't know)no
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)no
 - The S3 file system connector: (yes / no / don't know)no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)no
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20567) Document Error

2020-12-10 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247604#comment-17247604
 ] 

appleyuchi commented on FLINK-20567:


yes

> Document Error
> --
>
> Key: FLINK-20567
> URL: https://issues.apache.org/jira/browse/FLINK-20567
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / Ecosystem
>Reporter: appleyuchi
>Priority: Major
> Attachments: screenshot-1.png
>
>
> ||item||Content||
> |Document|[Link|https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/tableApi.html]|
> |part|Inner Join with Table Function (UDTF)|
> |origin|TableFunction split = new MySplitUDTF();|
> |change to|TableFunction<Tuple3<...>> split = new 
> MySplitUDTF();|
> I have run the following the codes successfully 
> that contain all the contents from the above.
> ①[InnerJoinwithTableFunction.java|https://paste.ubuntu.com/p/MMXJPrfRWC]
> ②[MySplitUDTF.java|https://paste.ubuntu.com/p/Q6fDHxw4Td/]
> Reason:
> In this part, 
> it says:
> joinLateral(call("split", $("c")).as("s", "t", "v"))
> it means:
> the udtf has 1 input "c",
> and 3 outputs "s", "t", "v"
> So:
> these outputs should have 3 types.
> such as TableFunction<Tuple3<...>>
> instead of only 
> -TableFunction split-
>  !screenshot-1.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
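The mismatch appleyuchi describes — a function that emits three columns but is declared with a single-column type — can be illustrated without Flink. The sketch below is a hypothetical stand-in (the `Row3` and `split` names are invented, not Flink API); it only shows why an emitter of three fields needs a three-field output type:

```java
import java.util.ArrayList;
import java.util.List;

public class UdtfTypeDemo {
    // Stand-in for Tuple3<String, String, Integer>: one field per output column.
    static final class Row3 {
        final String s;
        final String t;
        final int v;
        Row3(String s, String t, int v) { this.s = s; this.t = t; this.v = v; }
    }

    // Analogue of a TableFunction declared with a three-field output type:
    // one input column "c", three output columns "s", "t", "v".
    static List<Row3> split(String c) {
        List<Row3> out = new ArrayList<>();
        for (String part : c.split("#")) {
            out.add(new Row3(part, part, part.length()));
        }
        return out;
    }

    public static void main(String[] args) {
        // prints 2: "a#bb" splits into two rows
        System.out.println(split("a#bb").size());
    }
}
```

Declaring the function with only a single-field type (the analogue of `TableFunction<String>`) could not carry the three named columns, which is the documentation error reported above.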


[GitHub] [flink] flinkbot edited a comment on pull request #14358: [FLINK-20561][docs] Add documentation for `records-lag-max` metric.

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14358:
URL: https://github.com/apache/flink/pull/14358#issuecomment-742572892


   
   ## CI report:
   
   * fc19eace2d684e87045650fa35438b78d7f54199 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10785)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10765)
 
   * 97328897a804b3e088e7f431230b2c5ab1c45cfd UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14358: [FLINK-20561][docs] Add documentation for `records-lag-max` metric.

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14358:
URL: https://github.com/apache/flink/pull/14358#issuecomment-742572892


   
   ## CI report:
   
   * fc19eace2d684e87045650fa35438b78d7f54199 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10765)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10785)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] V1ncentzzZ commented on pull request #14358: [FLINK-20561][docs] Add documentation for `records-lag-max` metric.

2020-12-10 Thread GitBox


V1ncentzzZ commented on pull request #14358:
URL: https://github.com/apache/flink/pull/14358#issuecomment-742917433


   @flinkbot run azure







[GitHub] [flink] V1ncentzzZ commented on pull request #14358: [FLINK-20561][docs] Add documentation for `records-lag-max` metric.

2020-12-10 Thread GitBox


V1ncentzzZ commented on pull request #14358:
URL: https://github.com/apache/flink/pull/14358#issuecomment-742916445


   cc @zentol 







[jira] [Created] (FLINK-20568) Kerberized YARN per-job on Docker test failed with "Hadoop security with Kerberos is enabled but the login user does not have Kerberos credentials or delegation tokens!"

2020-12-10 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-20568:


 Summary: Kerberized YARN per-job on Docker test failed with 
"Hadoop security with Kerberos is enabled but the login user does not have 
Kerberos credentials or delegation tokens!"
 Key: FLINK-20568
 URL: https://issues.apache.org/jira/browse/FLINK-20568
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN
Affects Versions: 1.12.0, 1.11.0, 1.13.0
Reporter: Huang Xingbo


Instance on 1.11 branch

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10780=logs=08866332-78f7-59e4-4f7e-49a56faa3179=3e8647c1-5a28-5917-dd93-bf78594ea994]
{code:java}
2020-12-10T22:38:25.1087443Z  The program finished with the following exception:
2020-12-10T22:38:25.1087688Z 
2020-12-10T22:38:25.1088094Z org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Could not deploy Yarn job cluster.
2020-12-10T22:38:25.1088717Z 	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
2020-12-10T22:38:25.1089321Z 	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
2020-12-10T22:38:25.1090233Z 	at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
2020-12-10T22:38:25.1090749Z 	at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699)
2020-12-10T22:38:25.1091233Z 	at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232)
2020-12-10T22:38:25.1091705Z 	at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916)
2020-12-10T22:38:25.1092225Z 	at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992)
2020-12-10T22:38:25.1095464Z 	at java.security.AccessController.doPrivileged(Native Method)
2020-12-10T22:38:25.1095961Z 	at javax.security.auth.Subject.doAs(Subject.java:422)
2020-12-10T22:38:25.1096436Z 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
2020-12-10T22:38:25.1097027Z 	at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
2020-12-10T22:38:25.1097859Z 	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992)
2020-12-10T22:38:25.1098474Z Caused by: org.apache.flink.client.deployment.ClusterDeploymentException: Could not deploy Yarn job cluster.
2020-12-10T22:38:25.1099065Z 	at org.apache.flink.yarn.YarnClusterDescriptor.deployJobCluster(YarnClusterDescriptor.java:431)
2020-12-10T22:38:25.1099674Z 	at org.apache.flink.client.deployment.executors.AbstractJobClusterExecutor.execute(AbstractJobClusterExecutor.java:70)
2020-12-10T22:38:25.1100918Z 	at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1818)
2020-12-10T22:38:25.1101607Z 	at org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:128)
2020-12-10T22:38:25.1102202Z 	at org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:76)
2020-12-10T22:38:25.1102840Z 	at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1700)
2020-12-10T22:38:25.1103467Z 	at org.apache.flink.streaming.examples.wordcount.WordCount.main(WordCount.java:96)
2020-12-10T22:38:25.1104174Z 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-12-10T22:38:25.1104638Z 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-12-10T22:38:25.1105174Z 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-12-10T22:38:25.1105645Z 	at java.lang.reflect.Method.invoke(Method.java:498)
2020-12-10T22:38:25.1106119Z 	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
2020-12-10T22:38:25.1106495Z 	... 11 more
2020-12-10T22:38:25.1106940Z Caused by: java.lang.RuntimeException: Hadoop security with Kerberos is enabled but the login user does not have Kerberos credentials or delegation tokens!
2020-12-10T22:38:25.1107584Z 	at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:475)
2020-12-10T22:38:25.1108484Z 	at org.apache.flink.yarn.YarnClusterDescriptor.deployJobCluster(YarnClusterDescriptor.java:424)
2020-12-10T22:38:25.1109201Z 	... 22 more
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
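When this error shows up outside of CI, the usual cause is submitting the job without a Kerberos identity configured for Flink. A hedged sketch of the relevant `flink-conf.yaml` keys — the keytab path and principal below are placeholders, not values from this test setup:

```yaml
# Hypothetical example: give the Flink client/cluster a Kerberos login.
# Replace the path and principal with your environment's real values.
security.kerberos.login.keytab: /path/to/flink.keytab
security.kerberos.login.principal: flink-user@EXAMPLE.COM
```

Alternatively, a valid ticket cache (e.g. obtained via `kinit`) for the submitting user would also satisfy the check that fails in `YarnClusterDescriptor.deployInternal`.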


[jira] [Comment Edited] (FLINK-20561) Add documentation for `records-lag-max` metric.

2020-12-10 Thread xiaozilong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247139#comment-17247139
 ] 

xiaozilong edited comment on FLINK-20561 at 12/11/20, 1:53 AM:
---

We can type `flink_taskmanager_job_task_operator_KafkaConsumer_records_lag_max` 
as the query in Prometheus or Grafana.
  


was (Author: xiaozilong):
We can type `flink_taskmanager_job_task_operator_KafkaConsumer_records_lag_max` 
for query.
 

> Add documentation for `records-lag-max` metric. 
> 
>
> Key: FLINK-20561
> URL: https://issues.apache.org/jira/browse/FLINK-20561
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.11.0, 1.12.0
>Reporter: xiaozilong
>Priority: Major
>  Labels: pull-request-available
>
> Currently, there is no description of the Kafka topic lag metric in the [flink 
> metrics 
> docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/monitoring/metrics.html#connectors].
>  But this metric is actually reported by Flink, so we should add some docs 
> to guide users on how to use it.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20386) ClassCastException when lookup join a JDBC table on INT UNSIGNED column

2020-12-10 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247596#comment-17247596
 ] 

jiawen xiao commented on FLINK-20386:
-

I will create a PR to fix this.

> ClassCastException when lookup join a JDBC table on INT UNSIGNED column
> ---
>
> Key: FLINK-20386
> URL: https://issues.apache.org/jira/browse/FLINK-20386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Jark Wu
>Assignee: jiawen xiao
>Priority: Major
>
> The primary key of the MySQL table is an INT UNSIGNED column, but it is declared INT in 
> Flink. 
> I know the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#data-type-mapping]
>  say it should be declared BIGINT in Flink; however, it would be better not to fail 
> the job. 
> At least, the exception is hard for users to understand. We could also check 
> the schema before starting the job. 
> {code}
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>   at 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149) 
> ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at JoinTableFuncCollector$6460.collect(Unknown Source) ~[?:?]
>   at 
> org.apache.flink.table.functions.TableFunction.collect(TableFunction.java:203)
>  ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:162)
>  ~[?:?]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
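The failure in the quoted stack trace boils down to an unchecked cast: JDBC hands back MySQL's INT UNSIGNED values as `java.lang.Long`, while a Flink column declared INT reads the field as `Integer`. A minimal sketch of the mismatch (the `readAsInt` helper is invented for illustration; it mirrors what `GenericRowData.getInt` does internally):

```java
public class UnsignedIntDemo {
    // Simulate reading a row field that was declared INT in the schema.
    static String readAsInt(Object fromJdbc) {
        try {
            Integer i = (Integer) fromJdbc; // mirrors GenericRowData.getInt
            return "int: " + i;
        } catch (ClassCastException e) {
            return "ClassCastException: declare the column BIGINT instead";
        }
    }

    public static void main(String[] args) {
        // INT UNSIGNED can exceed Integer.MAX_VALUE, so JDBC returns a Long.
        Object fromJdbc = Long.valueOf(4_000_000_000L);
        System.out.println(readAsInt(fromJdbc));
    }
}
```

Declaring the Flink column BIGINT (so the row field is read as a long) avoids the cast entirely, which is what the data type mapping docs recommend.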


[GitHub] [flink] flinkbot edited a comment on pull request #14362: [FLINK-20540] Failed connecting to jdbc:postgresql://flink-postgres

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14362:
URL: https://github.com/apache/flink/pull/14362#issuecomment-742906429


   
   ## CI report:
   
   * 007bdcc0807864eda87a99c32eebd9b7f96b613c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10784)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #14362: [FLINK-20540] Failed connecting to jdbc:postgresql://flink-postgres

2020-12-10 Thread GitBox


flinkbot commented on pull request #14362:
URL: https://github.com/apache/flink/pull/14362#issuecomment-742906429


   
   ## CI report:
   
   * 007bdcc0807864eda87a99c32eebd9b7f96b613c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14361: [FLINK-19435][connectors/jdbc] Fix deadlock when loading different driver classes concurrently using Class.forName

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14361:
URL: https://github.com/apache/flink/pull/14361#issuecomment-742899629


   
   ## CI report:
   
   * fc55e236955b84396bed5f18851cd9dd0425060a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10783)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-19435) jdbc JDBCOutputFormat open function invoke Class.forName(drivername)

2020-12-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19435:
---
Labels: pull-request-available  (was: )

> jdbc JDBCOutputFormat open function invoke Class.forName(drivername)
> 
>
> Key: FLINK-19435
> URL: https://issues.apache.org/jira/browse/FLINK-19435
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.10.2
>Reporter: xiaodao
>Assignee: Kezhu Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.12.1
>
> Attachments: image-2020-10-09-20-48-48-261.png, 
> image-2020-10-09-20-49-23-644.png
>
>
> When we sink data to multiple JDBC output formats,
> {code}
> protected void establishConnection() throws SQLException, 
> ClassNotFoundException {
>  Class.forName(drivername);
>  if (username == null) {
>  connection = DriverManager.getConnection(dbURL);
>  } else {
>  connection = DriverManager.getConnection(dbURL, username, password);
>  }
> }
> {code}
> this may cause a JDBC driver deadlock. It needs to be changed to a synchronized method.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
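One way to avoid the deadlock described above is to serialize driver class loading behind a single lock instead of calling `Class.forName` directly from each connection attempt. A minimal sketch — the class and lock names are hypothetical, not the actual Flink fix:

```java
public class DriverLoadingDemo {
    private static final Object DRIVER_LOAD_LOCK = new Object();

    // Serialize class loading so two threads loading different JDBC driver
    // classes cannot deadlock between each driver's static initializer and
    // DriverManager's own class initialization.
    static Class<?> loadDriver(String driverName) {
        synchronized (DRIVER_LOAD_LOCK) {
            try {
                return Class.forName(driverName);
            } catch (ClassNotFoundException e) {
                // Wrapped for brevity in this sketch; real code would
                // propagate the checked exception.
                throw new RuntimeException(e);
            }
        }
    }

    public static void main(String[] args) {
        // A stdlib class stands in for a real driver class name here.
        System.out.println(loadDriver("java.util.ArrayList").getName());
    }
}
```

A connection provider would then call `loadDriver(drivername)` in `establishConnection` instead of the bare `Class.forName(drivername)` shown in the quoted snippet.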


[GitHub] [flink] kezhuw commented on pull request #14361: [FLINK-19435][connectors/jdbc] Fix deadlock when loading different driver classes concurrently using Class.forName

2020-12-10 Thread GitBox


kezhuw commented on pull request #14361:
URL: https://github.com/apache/flink/pull/14361#issuecomment-742904371


   @flinkbot 







[GitHub] [flink] flinkbot commented on pull request #14362: [FLINK-20540] Failed connecting to jdbc:postgresql://flink-postgres

2020-12-10 Thread GitBox


flinkbot commented on pull request #14362:
URL: https://github.com/apache/flink/pull/14362#issuecomment-742902114


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 007bdcc0807864eda87a99c32eebd9b7f96b613c (Fri Dec 11 
01:17:29 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-20540) Failed connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC

2020-12-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20540:
---
Labels: pull-request-available  (was: )

> Failed connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via 
> JDBC
> --
>
> Key: FLINK-20540
> URL: https://issues.apache.org/jira/browse/FLINK-20540
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.12.0, 1.11.1
>Reporter: zhangzhao
>Assignee: zhangzhao
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>  
> {code:java}
> // code placeholder
> import org.apache.flink.connector.jdbc.catalog.JdbcCatalog
> new JdbcCatalog(name, defaultDatabase, username, password, baseUrl){code}
>  
> The baseUrl must end with / when instantiating JdbcCatalog.
> But according to the [Flink 
> documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/dev/table/connectors/jdbc.html#usage-of-postgrescatalog]
>  and the code comments, baseUrl should support the format 
> {{"jdbc:postgresql://:"}}
>  
> When I use the baseUrl "{{jdbc:postgresql://:}}", the error stack is:
> {code:java}
> // code placeholder
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: org.apache.flink.util.FlinkRuntimeException: Could not execute application.
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)
> ... 7 more
> Caused by: org.apache.flink.util.FlinkRuntimeException: Could not execute application.
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
> ... 7 more
> Caused by: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Failed connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)
> ... 10 more
> Caused by: org.apache.flink.table.api.ValidationException: Failed connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog.open(AbstractJdbcCatalog.java:100)
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:191)
> org.apache.flink.table.api.internal.TableEnvImpl.registerCatalog(TableEnvImpl.scala:267)
> com.upai.jobs.TableBodySentFields.registerCatalog(TableBodySentFields.scala:25)
> com.upai.jobs.FusionGifShow$.run(FusionGifShow.scala:28)
> com.upai.jobs.FlinkTask$.delayedEndpoint$com$upai$jobs$FlinkTask$1(FlinkTask.scala:41)
> com.upai.jobs.FlinkTask$delayedInit$body.apply(FlinkTask.scala:11)
> scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
> scala.App$$anonfun$main$1.apply(App.scala:76)
> scala.App$$anonfun$main$1.apply(App.scala:76)
> 
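A defensive fix matching the reporter's observation would be to normalize the URL rather than require a trailing slash. A hedged sketch — the helper name is hypothetical, not the actual Flink patch:

```java
public class BaseUrlDemo {
    // Accept both "jdbc:postgresql://host:port" and "jdbc:postgresql://host:port/",
    // so the catalog no longer concatenates "...5432" + "flink" into a bad URL.
    static String normalizeBaseUrl(String baseUrl) {
        return baseUrl.endsWith("/") ? baseUrl : baseUrl + "/";
    }

    public static void main(String[] args) {
        System.out.println(normalizeBaseUrl("jdbc:postgresql://flink-postgres.cdn-flink:5432"));
    }
}
```

With this normalization, appending the database name always yields `...:5432/flink` regardless of how the user wrote the baseUrl.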

[GitHub] [flink] kougazhang opened a new pull request #14362: [FLINK-20540] Failed connecting to jdbc:postgresql://flink-postgres

2020-12-10 Thread GitBox


kougazhang opened a new pull request #14362:
URL: https://github.com/apache/flink/pull/14362


   …dn-flink:5432flink via JDBC
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[GitHub] [flink] kezhuw commented on pull request #14361: Flink 19435 deadlock while establish different jdbc connection concurrently

2020-12-10 Thread GitBox


kezhuw commented on pull request #14361:
URL: https://github.com/apache/flink/pull/14361#issuecomment-742900680


   @flinkbot attention @wuchong @JingsongLi 







[GitHub] [flink] flinkbot commented on pull request #14361: Flink 19435 deadlock while establish different jdbc connection concurrently

2020-12-10 Thread GitBox


flinkbot commented on pull request #14361:
URL: https://github.com/apache/flink/pull/14361#issuecomment-742899629


   
   ## CI report:
   
   * fc55e236955b84396bed5f18851cd9dd0425060a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #14361: Flink 19435 deadlock while establish different jdbc connection concurrently

2020-12-10 Thread GitBox


flinkbot commented on pull request #14361:
URL: https://github.com/apache/flink/pull/14361#issuecomment-742895133


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit fc55e236955b84396bed5f18851cd9dd0425060a (Fri Dec 11 
00:54:45 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] kezhuw opened a new pull request #14361: Flink 19435 deadlock while establish different jdbc connection concurrently

2020-12-10 Thread GitBox


kezhuw opened a new pull request #14361:
URL: https://github.com/apache/flink/pull/14361


   ## What is the purpose of the change
   
   Fix deadlock when loading different driver classes concurrently using 
`Class.forName`.
   
   ## Brief change log
   - Add a hanging test case that reveals the deadlock when loading different SQL driver 
classes concurrently using Class.forName.
   - Fix the deadlock when loading different SQL driver classes concurrently using 
Class.forName.
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   - Add test 
`SimpleJdbcConnectionProviderDriverClassConcurrentLoadingTest#testDriverClassConcurrentLoading`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   







[GitHub] [flink] flinkbot edited a comment on pull request #14341: [FLINK-20496][state backends] RocksDB partitioned index/filters option.

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14341:
URL: https://github.com/apache/flink/pull/14341#issuecomment-740887981


   
   ## CI report:
   
   * 01903f147f2a66dc1bb51359e15f4eb9714b3129 UNKNOWN
   * 9cc5abebea8b35545b206504e5590e395452ef5e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10776)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14359: [FLINK-20521][rpc] Add support for sending null responses

2020-12-10 Thread GitBox


flinkbot edited a comment on pull request #14359:
URL: https://github.com/apache/flink/pull/14359#issuecomment-742715028


   
   ## CI report:
   
   * bb07e40930830e8e0ec15177b2454adf7ec8876e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10774)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] zentol commented on a change in pull request #13551: [FLINK-19520][configuration] Add randomization of checkpoint config.

2020-12-10 Thread GitBox


zentol commented on a change in pull request #13551:
URL: https://github.com/apache/flink/pull/13551#discussion_r540530506



##
File path: flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/streaming/util/TestStreamEnvironment.java
##
@@ -44,6 +47,12 @@ public TestStreamEnvironment(
null);
 
setParallelism(parallelism);
+
+   if (Randomization) {
+   final String testName = TestNameProvider.getCurrentTestName();

Review comment:
   This seems a bit...janky? Why can we not mutate the configuration within 
the MiniClusterResource?
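
The `TestNameProvider.getCurrentTestName()` call under review suggests a static holder that a JUnit rule populates before each test runs. As a rough illustration only (class and method names here are assumptions, not Flink's actual implementation), such a holder could be backed by a `ThreadLocal` so concurrent test threads do not see each other's names:

```java
// Hypothetical sketch of a test-name holder that a JUnit rule could
// populate; names are illustrative, not the actual Flink classes.
public class TestNameHolder {

    // ThreadLocal so that parallel test executions keep separate names.
    private static final ThreadLocal<String> CURRENT_TEST_NAME = new ThreadLocal<>();

    // A rule's apply()/starting() hook would call this before each test.
    public static void setCurrentTestName(String name) {
        CURRENT_TEST_NAME.set(name);
    }

    // Production/test-support code reads the name of the running test,
    // or null if no rule has set one on this thread.
    public static String getCurrentTestName() {
        return CURRENT_TEST_NAME.get();
    }

    public static void main(String[] args) {
        setCurrentTestName("testCheckpointConfigRandomization");
        System.out.println(getCurrentTestName());
    }
}
```

With this shape, the name is only available on threads where a rule has set it, which is one reason the reviewer asks whether mutating the configuration directly inside the `MiniClusterResource` would be simpler.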









[GitHub] [flink] zentol commented on a change in pull request #13551: [FLINK-19520][configuration] Add randomization of checkpoint config.

2020-12-10 Thread GitBox


zentol commented on a change in pull request #13551:
URL: https://github.com/apache/flink/pull/13551#discussion_r540528885



##
File path: flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/util/TestLogger.java
##
@@ -67,6 +67,9 @@ public void failed(Throwable e, Description description) {
}
};
 
+   @Rule
+   public TestRule nameProvider = new TestNameProvider();

Review comment:
   ```suggestion
public final TestRule nameProvider = new TestNameProvider();
   ```








