[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   * c7941dda940e9593b6c9c68baac3ede726485d3e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24926)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   * 538563426a8a8870b4bf26c020101ad69a3d86c7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24921)
 
   * c7941dda940e9593b6c9c68baac3ede726485d3e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24926)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Comment Edited] (FLINK-9465) Specify a separate savepoint timeout option via CLI

2021-10-09 Thread Feifan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17426559#comment-17426559
 ] 

Feifan Wang edited comment on FLINK-9465 at 10/10/21, 2:29 AM:
---

Hi [~trohrmann], I opened a [pull 
request|https://github.com/apache/flink/pull/17443] to resolve this, but there 
are still some unit tests that I think need to be completed. Can you take a 
glance at this PR and give me some guidance on the unit tests?


was (Author: feifan wang):
Hi [~trohrmann], I opened a pull request to resolve this, but there are still 
some unit tests that I think need to be completed. Can you take a glance at 
this PR and give me some guidance on the unit tests?

> Specify a separate savepoint timeout option via CLI
> ---
>
> Key: FLINK-9465
> URL: https://issues.apache.org/jira/browse/FLINK-9465
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.5.0
>Reporter: Truong Duc Kien
>Assignee: Feifan Wang
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Savepoints can take much longer to perform than checkpoints, especially 
> with incremental checkpoints enabled. This leads to a couple of troubles:
>  * For our job, we currently have to set the checkpoint timeout much larger 
> than necessary; otherwise we would be unable to perform savepoints.
>  * During rush hour, our cluster encounters a high rate of checkpoint 
> timeouts due to backpressure; however, we're unable to migrate to a larger 
> configuration because savepoints also time out.
> In my opinion, the timeout for savepoints should be configurable separately, 
> both in the config file and as a parameter to the savepoint command.
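
For context, a minimal Java sketch of today's single shared timeout, using the existing CheckpointConfig API; the separate savepoint timeout proposed here is only described in the comments, since its concrete option name is defined by the pull request and not assumed below.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SharedTimeoutToday {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Today this single timeout bounds checkpoints and savepoints alike, so it has to be
        // raised to accommodate the slowest savepoint (setCheckpointTimeout is existing API).
        env.getCheckpointConfig().setCheckpointTimeout(30 * 60 * 1000L); // 30 minutes
        // FLINK-9465 proposes supplying a separate timeout when triggering a savepoint via
        // CLI/REST; the concrete option name is defined in PR #17443 and not invented here.
    }
}
```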



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   * 538563426a8a8870b4bf26c020101ad69a3d86c7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24921)
 
   * c7941dda940e9593b6c9c68baac3ede726485d3e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   * 538563426a8a8870b4bf26c020101ad69a3d86c7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24921)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] rionmonster commented on pull request #17061: [FLINK-23977][elasticsearch] Added DynamicElasticsearchSink for Dynamic ES Cluster Routing

2021-10-09 Thread GitBox


rionmonster commented on pull request #17061:
URL: https://github.com/apache/flink/pull/17061#issuecomment-939351275


   @fapaul / @AHeise 
   
   Thanks to you both for the feedback. I've closed the previous ticket and will 
abandon this pull request after hearing from one or both of you. I've created 
[this JIRA](https://issues.apache.org/jira/browse/FLINK-24493) to track that 
work - if one of you is able to assign it to me, I'd appreciate it, and I'll 
start working on it when I have some time.
   
   Additionally, I'm in agreement on the `DemultiplexingSink` as far as naming 
goes, and I'm probably leaning towards a similar approach to the one in this 
pull request (i.e. a generic `DemultiplexingSink` and a router interface that 
would support all of the various routing and configuration using the new 
SinkFunction interface).
   
   Any additional thoughts or brain-cells towards that would be appreciated!
   
   Thanks,
   
   Rion






[jira] [Commented] (FLINK-23977) Add Support for Dynamic Elastic Cluster/Index Routing

2021-10-09 Thread Rion Williams (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17426696#comment-17426696
 ] 

Rion Williams commented on FLINK-23977:
---

Closing per the discussion on the related pull request with [~arvid] and 
[~fabian.paul]; creating FLINK-24493 to handle that work. If someone can assign 
it to me, I'll start working on it when I have some time.

> Add Support for Dynamic Elastic Cluster/Index Routing
> -
>
> Key: FLINK-23977
> URL: https://issues.apache.org/jira/browse/FLINK-23977
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Rion Williams
>Assignee: Rion Williams
>Priority: Major
>  Labels: elasticsearch, pull-request-available
>  Time Spent: 16h
>  Remaining Estimate: 0h
>
> Currently only index-level dynamic routing is supported within the 
> Elasticsearch connector, which can be problematic if information such as the 
> host of the cluster (e.g. HttpHosts) is not known until run-time. 
> The idea behind this ticket would be to implement a wrapper that would allow 
> Elasticsearch sink instances to be created at run time and stored within an 
> in-memory cache (i.e. HashMap) using a key-value association where the key 
> represents a deterministically unique "route" and the value is the sink to 
> which all items with a given key are written.
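
A minimal sketch of the caching idea described above; the class and method names below are illustrative assumptions, not the connector's actual API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class RoutingSinkCache<ElementT, SinkT> {
    // In-memory cache keyed by a deterministically unique "route", as described in the ticket.
    private final Map<String, SinkT> sinksByRoute = new HashMap<>();
    private final Function<ElementT, String> routeExtractor; // derives the route key from an element
    private final Function<String, SinkT> sinkFactory;       // builds a sink for a new route at runtime

    public RoutingSinkCache(Function<ElementT, String> routeExtractor,
                            Function<String, SinkT> sinkFactory) {
        this.routeExtractor = routeExtractor;
        this.sinkFactory = sinkFactory;
    }

    public SinkT sinkFor(ElementT element) {
        // Create the sink lazily the first time a route is seen, then reuse it for that route.
        return sinksByRoute.computeIfAbsent(routeExtractor.apply(element), sinkFactory);
    }
}
```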





[jira] [Closed] (FLINK-23977) Add Support for Dynamic Elastic Cluster/Index Routing

2021-10-09 Thread Rion Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rion Williams closed FLINK-23977.
-
Resolution: Won't Do

> Add Support for Dynamic Elastic Cluster/Index Routing
> -
>
> Key: FLINK-23977
> URL: https://issues.apache.org/jira/browse/FLINK-23977
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Rion Williams
>Assignee: Rion Williams
>Priority: Major
>  Labels: elasticsearch, pull-request-available
>  Time Spent: 16h
>  Remaining Estimate: 0h
>
> Currently only index-level dynamic routing is supported within the 
> Elasticsearch connector, which can be problematic if information such as the 
> host of the cluster (e.g. HttpHosts) is not known until run-time. 
> The idea behind this ticket would be to implement a wrapper that would allow 
> Elasticsearch sink instances to be created at run time and stored within an 
> in-memory cache (i.e. HashMap) using a key-value association where the key 
> represents a deterministically unique "route" and the value is the sink to 
> which all items with a given key are written.





[GitHub] [flink] flinkbot edited a comment on pull request #17416: [FLINK-24459] Performance improvement of file sink

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17416:
URL: https://github.com/apache/flink/pull/17416#issuecomment-935728533


   
   ## CI report:
   
   * 4c928351e86f9dbe5aee32b00c90ea6250d78ee9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24920)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-24493) Introduce DemultiplexingSink to Support Dynamic Sink Routing

2021-10-09 Thread Rion Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rion Williams updated FLINK-24493:
--
Summary: Introduce DemultiplexingSink to Support Dynamic Sink Routing  
(was: Introduce DemultiplexingSink)

> Introduce DemultiplexingSink to Support Dynamic Sink Routing
> 
>
> Key: FLINK-24493
> URL: https://issues.apache.org/jira/browse/FLINK-24493
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Rion Williams
>Priority: Major
>
> Recently, FLINK-23977 attempted to introduce an approach to supporting 
> dynamic routing for the Elasticsearch sink; however, during discussion 
> within [the pull request|https://github.com/apache/flink/pull/17061], the 
> idea of a more generic approach was raised.
> The idea is that we could add a common DemultiplexingSink and a related 
> interface to the common connectors directory, handling routing to any number 
> of existing sinks, similar to the implementation mentioned in FLINK-23977.





[jira] [Created] (FLINK-24493) Introduce DemultiplexingSink

2021-10-09 Thread Rion Williams (Jira)
Rion Williams created FLINK-24493:
-

 Summary: Introduce DemultiplexingSink
 Key: FLINK-24493
 URL: https://issues.apache.org/jira/browse/FLINK-24493
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Common
Reporter: Rion Williams


Recently, FLINK-23977 attempted to introduce an approach to supporting dynamic 
routing for the Elasticsearch sink; however, during discussion within [the pull 
request|https://github.com/apache/flink/pull/17061], the idea of a more generic 
approach was raised.

The idea is that we could add a common DemultiplexingSink and a related 
interface to the common connectors directory, handling routing to any number of 
existing sinks, similar to the implementation mentioned in FLINK-23977.
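
One possible shape for the router interface mentioned above, as a sketch only; the interface and method names are assumptions for illustration, not an agreed Flink API.

```java
public interface SinkRouter<ElementT, SinkT> {
    // Deterministic key identifying which underlying sink should receive this element.
    String getRoute(ElementT element);

    // Build the underlying sink the first time a given route is encountered at runtime.
    SinkT createSink(String route, ElementT element);
}
```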





[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   * 54a7d0d8f8dad4d7d22db13e405534f95fd32ce3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24894)
 
   * 538563426a8a8870b4bf26c020101ad69a3d86c7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24921)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   * 54a7d0d8f8dad4d7d22db13e405534f95fd32ce3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24894)
 
   * 538563426a8a8870b4bf26c020101ad69a3d86c7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   * 54a7d0d8f8dad4d7d22db13e405534f95fd32ce3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24894)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   * 54a7d0d8f8dad4d7d22db13e405534f95fd32ce3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24894)
 
   * 538563426a8a8870b4bf26c020101ad69a3d86c7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17443: [FLINK-9465][Runtime/Checkpointing] Specify a separate savepoint timeout option via CLI and REST API

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17443:
URL: https://github.com/apache/flink/pull/17443#issuecomment-939267795


   
   ## CI report:
   
   * e63def36ef5f877d6d3347ebfe0927e34c58087c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24917)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17444: [FLINK-24492][table-planner]incorrect implicit type conversion between numeric and (var)char

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17444:
URL: https://github.com/apache/flink/pull/17444#issuecomment-939291136


   
   ## CI report:
   
   * 7837a03f2745e5d67912fda5317bf755c0a62d8f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24914)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17416: [FLINK-24459] Performance improvement of file sink

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17416:
URL: https://github.com/apache/flink/pull/17416#issuecomment-935728533


   
   ## CI report:
   
   * 0807e52c248da31b34470447184d5d7ebbff189b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24789)
 
   * 4c928351e86f9dbe5aee32b00c90ea6250d78ee9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24920)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17416: [FLINK-24459] Performance improvement of file sink

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17416:
URL: https://github.com/apache/flink/pull/17416#issuecomment-935728533


   
   ## CI report:
   
   * 0807e52c248da31b34470447184d5d7ebbff189b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24789)
 
   * 4c928351e86f9dbe5aee32b00c90ea6250d78ee9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] trushev commented on a change in pull request #17416: [FLINK-24459] Performance improvement of file sink

2021-10-09 Thread GitBox


trushev commented on a change in pull request #17416:
URL: https://github.com/apache/flink/pull/17416#discussion_r725498056



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/PartitionPathUtils.java
##
@@ -113,19 +113,36 @@ private static String escapePathName(String path) {
 throw new TableException("Path should not be null or empty: " + 
path);
 }
 
-StringBuilder sb = new StringBuilder();
+StringBuilder sb = null;
 for (int i = 0; i < path.length(); i++) {
 char c = path.charAt(i);
 if (needsEscaping(c)) {
-sb.append('%');
-sb.append(String.format("%1$02X", (int) c));
-} else {
+if (sb == null) {
+sb = new StringBuilder(path.length() + 2);
+for (int j = 0; j < i; j++) {
+sb.append(path.charAt(j));
+}
+}
+escapeChar(c, sb);
+} else if (sb != null) {
 sb.append(c);
 }
 }

Review comment:
   It works correctly. I added several unit tests that cover the scenarios of 
head, middle, tail, missing, and combined control characters.
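
For reference, the escaping formula visible in the diff ('%' followed by the two-digit uppercase hex code of the character), pulled into a standalone snippet; this is an illustration, not the actual Flink utility class.

```java
public class EscapeCharDemo {
    // Mirrors the formula shown in the diff above.
    static String escapeChar(char c) {
        return "%" + String.format("%1$02X", (int) c);
    }

    public static void main(String[] args) {
        System.out.println(escapeChar(':')); // prints %3A
        System.out.println(escapeChar(' ')); // prints %20
    }
}
```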








[jira] [Closed] (FLINK-24389) getDescription() in `CatalogTableImpl` should check null

2021-10-09 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-24389.
---
Fix Version/s: 1.15.0
   Resolution: Fixed

Fixed in master: 2d38d8bd72d9b80227071dde77dcad688d78a47f

> getDescription() in `CatalogTableImpl` should check null
> 
>
> Key: FLINK-24389
> URL: https://issues.apache.org/jira/browse/FLINK-24389
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Neng Lu
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: pull-request-available, starter
> Fix For: 1.15.0
>
>
> ```
> @Override
> public Optional<String> getDescription() {
>     return Optional.of(getComment());
> }
> ```
> If the table comment is not set, then `getDescription` will throw 
> NullPointerException 
> [https://github.com/apache/flink/blame/5b9e7882207357120717966d8bf7efd53c53ede5/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogTableImpl.java#L69]
>  
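
For illustration, the usual idiom for avoiding this kind of NPE; the actual change is in PR #17438, and the class and field names below are made up.

```java
import java.util.Optional;

class CommentHolder {
    private final String comment; // may be null when no comment was set

    CommentHolder(String comment) {
        this.comment = comment;
    }

    Optional<String> getDescription() {
        // Optional.of(null) throws NullPointerException; ofNullable yields Optional.empty() instead.
        return Optional.ofNullable(comment);
    }
}
```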





[GitHub] [flink] wuchong merged pull request #17438: [FLINK-24389] [Table SQL / API] Fix NPE in CatalogTableImpl#getDescription

2021-10-09 Thread GitBox


wuchong merged pull request #17438:
URL: https://github.com/apache/flink/pull/17438


   






[GitHub] [flink] flinkbot edited a comment on pull request #17438: [FLINK-24389] [Table SQL / API] Fix NPE in CatalogTableImpl#getDescription

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17438:
URL: https://github.com/apache/flink/pull/17438#issuecomment-938665309


   
   ## CI report:
   
   * 0c66bb628bffb8b866f4840c2170778382b86908 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24905)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17443: [FLINK-9465][Runtime/Checkpointing] Specify a separate savepoint timeout option via CLI and REST API

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17443:
URL: https://github.com/apache/flink/pull/17443#issuecomment-939267795


   
   ## CI report:
   
   * 79df4a732b55a8e7a0179b337776e36f7df5c83e Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24906)
 
   * e63def36ef5f877d6d3347ebfe0927e34c58087c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24917)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17443: [FLINK-9465][Runtime/Checkpointing] Specify a separate savepoint timeout option via CLI and REST API

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17443:
URL: https://github.com/apache/flink/pull/17443#issuecomment-939267795


   
   ## CI report:
   
   * 79df4a732b55a8e7a0179b337776e36f7df5c83e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24906)
 
   * e63def36ef5f877d6d3347ebfe0927e34c58087c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17444: [FLINK-24492][table-planner]incorrect implicit type conversion between numeric and (var)char

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17444:
URL: https://github.com/apache/flink/pull/17444#issuecomment-939291136


   
   ## CI report:
   
   * 7837a03f2745e5d67912fda5317bf755c0a62d8f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24914)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Closed] (FLINK-24291) Decimal precision is lost when deserializing in test cases

2021-10-09 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-24291.
--
Resolution: Fixed

Fixed in 1.15.0: 4d69f7f844725534b66e8cc9f5e64d6d10226055
Fixed in 1.14.1: 3e0c2a12d2d270162c3d8a4a77451020926293a0
Fixed in 1.13.3: 87a3e2e227b0acf741f93dd14133f0e7ba6dbfef

> Decimal precision is lost when deserializing in test cases
> --
>
> Key: FLINK-24291
> URL: https://issues.apache.org/jira/browse/FLINK-24291
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyangzhong
>Assignee: xuyangzhong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.13.3, 1.15.0, 1.14.1
>
>
> When added the test case following into FileSystemItCaseBase:
> {code:java}
> // create table
> tableEnv.executeSql(
>   s"""
>  |create table test2 (
>  |  c0 decimal(10,0), c1 int
>  |) with (
>  |  'connector' = 'filesystem',
>  |  'path' = '/Users/zhongxuyang/test/test',
>  |  'format' = 'testcsv'
>  |)
>""".stripMargin
> )
> //test file content is:
> //2113554011,1
> //2113554022,2
> {code}
> and
> {code:java}
> // select sql
> @Test
> def myTest2(): Unit={
>   check(
> "SELECT c0 FROM test2",
> Seq(
>   row(2113554011),
>   row(2113554022)
> ))
> }
> {code}
> I got an exception:
> {code}
> java.lang.RuntimeException: Failed to fetch next 
> resultjava.lang.RuntimeException: Failed to fetch next result
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>  at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
>  at java.util.Iterator.forEachRemaining(Iterator.java:115) at 
> org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109) 
> at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:106)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.check(BatchFileSystemITCaseBase.scala:46)
>  at 
> org.apache.flink.table.planner.runtime.FileSystemITCaseBase$class.myTest2(FileSystemITCaseBase.scala:128)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.myTest2(BatchFileSystemITCaseBase.scala:33)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> 

[GitHub] [flink] godfreyhe commented on a change in pull request #17311: [FLINK-24318][table-planner]Casting a number to boolean has different results between 'select' fields and 'where' condition

2021-10-09 Thread GitBox


godfreyhe commented on a change in pull request #17311:
URL: https://github.com/apache/flink/pull/17311#discussion_r725485628



##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/CalcITCase.scala
##
@@ -50,6 +51,146 @@ class CalcITCase extends StreamingTestBase {
   @Rule
   def usesLegacyRows: LegacyRowResource = LegacyRowResource.INSTANCE
 
+  @Test
+  def testCastIntegerToBooleanTrueInProjection(): Unit ={
+val sqlQuery = "SELECT CAST(1 AS BOOLEAN)"
+
+val outputType = InternalTypeInfo.ofFields(
+  new BooleanType())
+
+val result = tEnv.sqlQuery(sqlQuery).toAppendStream[RowData]
+val sink = new TestingAppendRowDataSink(outputType)
+result.addSink(sink)
+env.execute()
+
+val expected = List("+I(true)")
+assertEquals(expected.sorted, sink.getAppendResults.sorted)
+  }
+
+  @Test
+  def testCastIntegerToBooleanFalseInProjection(): Unit ={
+val sqlQuery = "SELECT CAST(0 AS BOOLEAN)"
+
+val outputType = InternalTypeInfo.ofFields(
+  new BooleanType())
+
+val result = tEnv.sqlQuery(sqlQuery).toAppendStream[RowData]
+val sink = new TestingAppendRowDataSink(outputType)
+result.addSink(sink)
+env.execute()
+
+val expected = List("+I(false)")
+assertEquals(expected.sorted, sink.getAppendResults.sorted)
+  }
+
+  @Test
+  def testCastDecimalToBooleanTrueInProjection(): Unit ={
+val sqlQuery = "SELECT CAST(1.1 AS BOOLEAN)"
+
+val outputType = InternalTypeInfo.ofFields(
+  new BooleanType())
+
+val result = tEnv.sqlQuery(sqlQuery).toAppendStream[RowData]
+val sink = new TestingAppendRowDataSink(outputType)
+result.addSink(sink)
+env.execute()
+
+val expected = List("+I(true)")
+assertEquals(expected.sorted, sink.getAppendResults.sorted)
+  }
+
+  @Test
+  def testCastDecimalToBooleanFalseInProjection(): Unit ={

Review comment:
   The above four tests can be merged into one test: SELECT CAST(0 AS 
BOOLEAN), CAST(1 AS BOOLEAN), CAST(1.1 AS BOOLEAN), CAST(0.00 AS BOOLEAN), 
which is more efficient.
   

##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/plan/utils/FlinkRexUtilTest.scala
##
@@ -521,4 +501,12 @@ class FlinkRexUtilTest {
 val expressionReducer = new ExpressionReducer(TableConfig.getDefault, 
false)
 FlinkRexUtil.simplify(rexBuilder, expr, expressionReducer)
   }
+
+  def makeToBooleanCast(fromData: RexNode): RexNode ={

Review comment:
   mark this method as 'private'

##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/CalcITCase.scala
##
@@ -50,6 +51,146 @@ class CalcITCase extends StreamingTestBase {
   @Rule
   def usesLegacyRows: LegacyRowResource = LegacyRowResource.INSTANCE
 
+  @Test
+  def testCastIntegerToBooleanTrueInProjection(): Unit ={
+val sqlQuery = "SELECT CAST(1 AS BOOLEAN)"
+
+val outputType = InternalTypeInfo.ofFields(
+  new BooleanType())
+
+val result = tEnv.sqlQuery(sqlQuery).toAppendStream[RowData]
+val sink = new TestingAppendRowDataSink(outputType)
+result.addSink(sink)
+env.execute()
+
+val expected = List("+I(true)")
+assertEquals(expected.sorted, sink.getAppendResults.sorted)
+  }
+
+  @Test
+  def testCastIntegerToBooleanFalseInProjection(): Unit ={
+val sqlQuery = "SELECT CAST(0 AS BOOLEAN)"
+
+val outputType = InternalTypeInfo.ofFields(
+  new BooleanType())
+
+val result = tEnv.sqlQuery(sqlQuery).toAppendStream[RowData]
+val sink = new TestingAppendRowDataSink(outputType)
+result.addSink(sink)
+env.execute()
+
+val expected = List("+I(false)")
+assertEquals(expected.sorted, sink.getAppendResults.sorted)
+  }
+
+  @Test
+  def testCastDecimalToBooleanTrueInProjection(): Unit ={
+val sqlQuery = "SELECT CAST(1.1 AS BOOLEAN)"
+
+val outputType = InternalTypeInfo.ofFields(
+  new BooleanType())
+
+val result = tEnv.sqlQuery(sqlQuery).toAppendStream[RowData]
+val sink = new TestingAppendRowDataSink(outputType)
+result.addSink(sink)
+env.execute()
+
+val expected = List("+I(true)")
+assertEquals(expected.sorted, sink.getAppendResults.sorted)
+  }
+
+  @Test
+  def testCastDecimalToBooleanFalseInProjection(): Unit ={
+val sqlQuery = "SELECT CAST(0.00 AS BOOLEAN)"
+
+val outputType = InternalTypeInfo.ofFields(
+  new BooleanType())
+
+val result = tEnv.sqlQuery(sqlQuery).toAppendStream[RowData]
+val sink = new TestingAppendRowDataSink(outputType)
+result.addSink(sink)
+env.execute()
+
+val expected = List("+I(false)")
+assertEquals(expected.sorted, sink.getAppendResults.sorted)
+  }
+
+
+  @Test
+  def testCastIntegerToBooleanTrueInCondition(): Unit ={
+val sqlQuery = "SELECT * FROM MyTableRow WHERE b = CAST(1 AS BOOLEAN)"
+
+val rowData1: GenericRowData = new GenericRowData(2)
+

[GitHub] [flink] flinkbot edited a comment on pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16606:
URL: https://github.com/apache/flink/pull/16606#issuecomment-887431748


   
   ## CI report:
   
   * b78388d1ba483c09c28f18c22f8930f6e1484779 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24911)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #17444: [FLINK-24492][table-planner]incorrect implicit type conversion between numeric and (var)char

2021-10-09 Thread GitBox


flinkbot commented on pull request #17444:
URL: https://github.com/apache/flink/pull/17444#issuecomment-939291136


   
   ## CI report:
   
   * 7837a03f2745e5d67912fda5317bf755c0a62d8f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] godfreyhe closed pull request #17308: [FLINK-24291][table-planner]Decimal precision is lost when deserializing in test cases

2021-10-09 Thread GitBox


godfreyhe closed pull request #17308:
URL: https://github.com/apache/flink/pull/17308


   






[GitHub] [flink] flinkbot edited a comment on pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16606:
URL: https://github.com/apache/flink/pull/16606#issuecomment-887431748


   
   ## CI report:
   
   * 3d32e902cee493a984bc052b76dfec984743921f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24691)
 
   * b78388d1ba483c09c28f18c22f8930f6e1484779 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24911)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #17444: [FLINK-24492][table-planner]incorrect implicit type conversion between numeric and (var)char

2021-10-09 Thread GitBox


flinkbot commented on pull request #17444:
URL: https://github.com/apache/flink/pull/17444#issuecomment-939286691


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7837a03f2745e5d67912fda5317bf755c0a62d8f (Sat Oct 09 
12:08:56 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-24492).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] xuyangzhong opened a new pull request #17444: [FLINK-24492]incorrect implicit type conversion between numeric and (var)char

2021-10-09 Thread GitBox


xuyangzhong opened a new pull request #17444:
URL: https://github.com/apache/flink/pull/17444


   ## What is the purpose of the change
   
   Before, the result of the SQL "select 1 = '1'" was false, which is wrong. 
And when "1 = '1'" appears in a join condition, it throws an exception instead 
of returning a true or false result.
   
   "=" should have the same behavior as ">" and "<", which produce the correct 
results. So before Calcite solves this bug or Flink supports this kind of 
implicit type conversion, we'd better temporarily forbid this implicit type 
conversion in "=" and "<>" and throw a clear exception instead of giving a 
wrong result.
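
   A minimal reproduction sketch of the behavior described above (the results in the comments are as reported, not re-verified here):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ImplicitCastRepro {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.executeSql("SELECT 1 = '1'").print(); // reported to yield false, which is wrong
        tEnv.executeSql("SELECT 1 > '1'").print(); // > and < reportedly apply the conversion correctly
    }
}
```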
   
   ## Brief change log
   
  - In the codegen function, check the validity of implicit type conversions 
to avoid conversion between numeric and (var)char.
  - According to FLIP-154, add many test cases for the implicit type 
conversions that are now supported.
  - Update the existing test cases.
   
   
   ## Verifying this change
   
   This change added tests to verify this change.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? 
   






[jira] [Updated] (FLINK-24492) incorrect implicit type conversion between numeric and (var)char

2021-10-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24492:
---
Labels: pull-request-available  (was: )

> incorrect implicit type conversion between numeric and (var)char
> 
>
> Key: FLINK-24492
> URL: https://issues.apache.org/jira/browse/FLINK-24492
> Project: Flink
>  Issue Type: Bug
>Reporter: xuyangzhong
>Priority: Minor
>  Labels: pull-request-available
>
> The result of the SQL "select 1 = '1'" is false. This is caused by the 
> CodeGen, which incorrectly transforms this "=" into 
> "BinaryStringData.equals(int 1)". And "<>" has the same wrong result.
> In my opinion, "=" should have the same behavior as ">" and "<", which have 
> the correct results. So before Calcite solves this bug or Flink supports this 
> kind of implicit type conversion, we'd better temporarily forbid this 
> implicit type conversion in "=" and "<>".





[jira] [Closed] (FLINK-22954) Don't support consuming update and delete changes when use table function that does not contain table field

2021-10-09 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-22954.
--
Fix Version/s: 1.14.1
   1.15.0
   Resolution: Fixed

Fixed in 1.15.0: f83e60387a17bd927004891fe6dcc35dfddf0488
Fixed in 1.14.1: 91cb0005dea33a96d445ff9ea08fa7668fd5513f
Fixed in 1.13.3: 4250543ab483ccc6f8fb0f16788dece42ce6a4d0

> Don't support consuming update and delete changes when use table function 
> that does not contain table field
> ---
>
> Key: FLINK-22954
> URL: https://issues.apache.org/jira/browse/FLINK-22954
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: hehuiyuan
>Assignee: Wenlong Lyu
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.13.3, 1.15.0, 1.14.1
>
>
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.TableException: Table 
> sink 'default_catalog.default_database.kafkaTableSink' doesn't support 
> consuming update and delete changes which is produced by node 
> Join(joinType=[LeftOuterJoin], where=[true], select=[name, word], 
> leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey])Exception in thread 
> "main" org.apache.flink.table.api.TableException: Table sink 
> 'default_catalog.default_database.kafkaTableSink' doesn't support consuming 
> update and delete changes which is produced by node 
> Join(joinType=[LeftOuterJoin], where=[true], select=[name, word], 
> leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey]) at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.createNewNode(FlinkChangelogModeInferenceProgram.scala:382)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visit(FlinkChangelogModeInferenceProgram.scala:265)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.org$apache$flink$table$planner$plan$optimize$program$FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$visitChild(FlinkChangelogModeInferenceProgram.scala:341)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$anonfun$3.apply(FlinkChangelogModeInferenceProgram.scala:330)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$anonfun$3.apply(FlinkChangelogModeInferenceProgram.scala:329)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.immutable.Range.foreach(Range.scala:160) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visitChildren(FlinkChangelogModeInferenceProgram.scala:329)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visit(FlinkChangelogModeInferenceProgram.scala:279)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.org$apache$flink$table$planner$plan$optimize$program$FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$visitChild(FlinkChangelogModeInferenceProgram.scala:341)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$anonfun$3.apply(FlinkChangelogModeInferenceProgram.scala:330)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$anonfun$3.apply(FlinkChangelogModeInferenceProgram.scala:329)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.immutable.Range.foreach(Range.scala:160) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visitChildren(FlinkChangelogModeInferenceProgram.scala:329)
>  at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16606:
URL: https://github.com/apache/flink/pull/16606#issuecomment-887431748


   
   ## CI report:
   
   * 3d32e902cee493a984bc052b76dfec984743921f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24691)
 
   * b78388d1ba483c09c28f18c22f8930f6e1484779 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] RocMarshal edited a comment on pull request #17352: [FLINK-10230][table] Support 'SHOW CREATE VIEW' syntax to print the query of a view

2021-10-09 Thread GitBox


RocMarshal edited a comment on pull request #17352:
URL: https://github.com/apache/flink/pull/17352#issuecomment-932249100


   Thanks @Airblader for the review. 
   Hi, @wuchong , @xintongsong @twalthr @tisonkun @pnowojski @fsk119 . Could 
you help me merge it if there is nothing inappropriate? Thank you.
   
   






[GitHub] [flink] RocMarshal edited a comment on pull request #16962: [FLINK-15352][connector-jdbc] Develop MySQLCatalog to connect Flink with MySQL tables and ecosystem.

2021-10-09 Thread GitBox


RocMarshal edited a comment on pull request #16962:
URL: https://github.com/apache/flink/pull/16962#issuecomment-932250048


   Thanks @Airblader @MartijnVisser for the review. 
   Hi, @wuchong , @xintongsong @twalthr @pnowojski @fsk119 . Could you help me 
merge it if there is nothing inappropriate? Thank you for your help.
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   * 54a7d0d8f8dad4d7d22db13e405534f95fd32ce3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24894)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Comment Edited] (FLINK-10230) Support 'SHOW CREATE VIEW' syntax to print the query of a view

2021-10-09 Thread Roc Marshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425935#comment-17425935
 ] 

Roc Marshal edited comment on FLINK-10230 at 10/9/21, 11:16 AM:


Could you [~jark] [~MartijnVisser] help me merge it if there is nothing 
inappropriate? Thank you.


was (Author: rocmarshal):
Could you [~MartijnVisser]  help me to merge it if there are nothing 
inappropriate ? Thank you.

> Support 'SHOW CREATE VIEW' syntax to print the query of a view
> --
>
> Key: FLINK-10230
> URL: https://issues.apache.org/jira/browse/FLINK-10230
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Client
>Reporter: Timo Walther
>Assignee: Roc Marshal
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.15.0
>
>
> FLINK-10163 added initial support for views in SQL Client. We should add a 
> command that allows for printing the query of a view for debugging. MySQL 
> offers {{SHOW CREATE VIEW}} for this. Hive generalizes this to {{SHOW CREATE 
> TABLE}}. The latter one could be extended to also show information about the 
> used table factories and properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17436: [FLINK-15987]SELECT 1.0e0 / 0.0e0 throws NumberFormatException

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17436:
URL: https://github.com/apache/flink/pull/17436#issuecomment-938641830


   
   ## CI report:
   
   * 5d0f5029884cd1df7cee4b455952122fce5046ed Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24893)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17443: [FLINK-9465][Runtime/Checkpointing] Specify a separate savepoint timeout option via CLI and REST API

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17443:
URL: https://github.com/apache/flink/pull/17443#issuecomment-939267795


   
   ## CI report:
   
   * 79df4a732b55a8e7a0179b337776e36f7df5c83e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24906)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17436: [FLINK-15987]SELECT 1.0e0 / 0.0e0 throws NumberFormatException

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17436:
URL: https://github.com/apache/flink/pull/17436#issuecomment-938641830


   
   ## CI report:
   
   * d14a0793a2628a8c9bd778ccbee95e7419f15346 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24857)
 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24891)
 
   * 5d0f5029884cd1df7cee4b455952122fce5046ed Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24893)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17438: [FLINK-24389] [Table SQL / API] Fix NPE in CatalogTableImpl#getDescription

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17438:
URL: https://github.com/apache/flink/pull/17438#issuecomment-938665309


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * 0c66bb628bffb8b866f4840c2170778382b86908 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24905)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hililiwei commented on a change in pull request #17416: [FLINK-24459] Performance improvement of file sink

2021-10-09 Thread GitBox


hililiwei commented on a change in pull request #17416:
URL: https://github.com/apache/flink/pull/17416#discussion_r725465714



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/PartitionPathUtils.java
##
@@ -113,19 +113,36 @@ private static String escapePathName(String path) {
 throw new TableException("Path should not be null or empty: " + 
path);
 }
 
-StringBuilder sb = new StringBuilder();
+StringBuilder sb = null;
 for (int i = 0; i < path.length(); i++) {
 char c = path.charAt(i);
 if (needsEscaping(c)) {
-sb.append('%');
-sb.append(String.format("%1$02X", (int) c));
-} else {
+if (sb == null) {
+sb = new StringBuilder(path.length() + 2);
+for (int j = 0; j < i; j++) {
+sb.append(path.charAt(j));
+}
+}
+escapeChar(c, sb);
+} else if (sb != null) {
 sb.append(c);
 }
 }

Review comment:
   Will an incorrect result be returned if the character that needs to be escaped 
is in the middle?
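   
   For reference, here is a self-contained sketch of the lazy-copy pattern under discussion; `needsEscaping` and `escapeChar` below are simplified stand-ins for the real `PartitionPathUtils` helpers (assumptions for illustration only), not the actual implementations:
   
```java
// Standalone sketch of the lazy StringBuilder allocation in escapePathName.
public class LazyEscapeSketch {

    // stand-in: assume only '%' and '/' need escaping in this sketch
    private static boolean needsEscaping(char c) {
        return c == '%' || c == '/';
    }

    // stand-in for the real escapeChar: '%' followed by the two-digit hex code
    private static void escapeChar(char c, StringBuilder sb) {
        sb.append('%').append(String.format("%1$02X", (int) c));
    }

    static String escapePathName(String path) {
        StringBuilder sb = null;
        for (int i = 0; i < path.length(); i++) {
            char c = path.charAt(i);
            if (needsEscaping(c)) {
                if (sb == null) {
                    // first escapable char: backfill everything seen so far
                    sb = new StringBuilder(path.length() + 2);
                    for (int j = 0; j < i; j++) {
                        sb.append(path.charAt(j));
                    }
                }
                escapeChar(c, sb);
            } else if (sb != null) {
                sb.append(c);
            }
        }
        // nothing needed escaping: return the original string, no builder allocated
        return sb == null ? path : sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapePathName("ab/cd")); // ab%2Fcd (prefix "ab" backfilled)
        System.out.println(escapePathName("abcd"));  // abcd (returned as-is)
    }
}
```
   
   With an escapable character in the middle, the already-scanned prefix is backfilled into the builder before the escape sequence is appended, so the sketch still produces the expected result; when nothing needs escaping, the input is returned without allocating a builder at all.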




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24492) incorrect implicit type conversion between numeric and (var)char

2021-10-09 Thread xuyangzhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyangzhong updated FLINK-24492:

Description: 
The result of the sql "select 1 = '1'" is false. This is caused by the CodeGen: 
CodeGen incorrectly transforms this "=" into "BinaryStringData.equals(int 1)". 
And "<>" has the same wrong result.

In my opinion, "=" should have the same behavior as ">" and "<", which produce 
the correct results. So before calcite solves this bug or flink supports this 
kind of implicit type conversion, we'd better temporarily forbid this 
implicit type conversion in "=" and "<>".

  was:
The result of the sql "select 1 = '1'" is false. This is caused by the CodeGen. 
CodeGen  incorrectly transform this "=" to "BinaryStringData.equals (int 1)". 
And "<>" has the same wrong result.

In my opinion, "=" should have the same behavior with ">" and "<", which have 
the correct results. So before calcite solves this bug or flink support this 
kind of implicit type conversion, we'd better temporarily forbidding this 
implicit type conversion in "=" and "<>".


> incorrect implicit type conversion between numeric and (var)char
> 
>
> Key: FLINK-24492
> URL: https://issues.apache.org/jira/browse/FLINK-24492
> Project: Flink
>  Issue Type: Bug
>Reporter: xuyangzhong
>Priority: Minor
>
> The result of the sql "select 1 = '1'" is false. This is caused by the 
> CodeGen. CodeGen  incorrectly transform this "=" to "BinaryStringData.equals 
> (int 1)". And "<>" has the same wrong result.
> In my opinion, "=" should have the same behavior with ">" and "<", which have 
> the correct results. So before calcite solves this bug or flink supports this 
> kind of implicit type conversion, we'd better temporarily forbidding this 
> implicit type conversion in "=" and "<>".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24492) incorrect implicit type conversion between numeric and (var)char

2021-10-09 Thread xuyangzhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyangzhong updated FLINK-24492:

Description: 
The result of the sql "select 1 = '1'" is false. This is caused by the CodeGen. 
CodeGen  incorrectly transform this "=" to "BinaryStringData.equals (int 1)". 
And "<>" has the same wrong result.

In my opinion, "=" should have the same behavior with ">" and "<", which have 
the correct results. So before calcite solves this bug or flink support this 
kind of implicit type conversion, we'd better temporarily forbidding this 
implicit type conversion in "=" and "<>".

  was:
The result of the sql "select 1 = '1'" is false. This is caused by the CodeGen. 
CodeGen  incorrectly transform this "=" to "BinaryStringData.equals (int 1)". 
And "<>" has the same wrong result.

In my opinion, "=" should have the same behavior with ">" and "<", which have 
the correct results. So before calcite solves this bug, we'd better temporarily 
forbidding this implicit type conversion in "=" and "<>".


> incorrect implicit type conversion between numeric and (var)char
> 
>
> Key: FLINK-24492
> URL: https://issues.apache.org/jira/browse/FLINK-24492
> Project: Flink
>  Issue Type: Bug
>Reporter: xuyangzhong
>Priority: Minor
>
> The result of the sql "select 1 = '1'" is false. This is caused by the 
> CodeGen. CodeGen  incorrectly transform this "=" to "BinaryStringData.equals 
> (int 1)". And "<>" has the same wrong result.
> In my opinion, "=" should have the same behavior with ">" and "<", which have 
> the correct results. So before calcite solves this bug or flink support this 
> kind of implicit type conversion, we'd better temporarily forbidding this 
> implicit type conversion in "=" and "<>".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24492) incorrect implicit type conversion between numeric and (var)char

2021-10-09 Thread xuyangzhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17426562#comment-17426562
 ] 

xuyangzhong commented on FLINK-24492:
-

link https://issues.apache.org/jira/browse/FLINK-18234

> incorrect implicit type conversion between numeric and (var)char
> 
>
> Key: FLINK-24492
> URL: https://issues.apache.org/jira/browse/FLINK-24492
> Project: Flink
>  Issue Type: Bug
>Reporter: xuyangzhong
>Priority: Minor
>
> The result of the sql "select 1 = '1'" is false. This is caused by the 
> CodeGen. CodeGen  incorrectly transform this "=" to "BinaryStringData.equals 
> (int 1)". And "<>" has the same wrong result.
> In my opinion, "=" should have the same behavior with ">" and "<", which have 
> the correct results. So before calcite solves this bug, we'd better 
> temporarily forbidding this implicit type conversion in "=" and "<>".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18234) Implicit type conversion in join condition

2021-10-09 Thread xuyangzhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17426561#comment-17426561
 ] 

xuyangzhong commented on FLINK-18234:
-

link https://issues.apache.org/jira/browse/FLINK-24492

> Implicit type conversion in join condition
> --
>
> Key: FLINK-18234
> URL: https://issues.apache.org/jira/browse/FLINK-18234
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.1
>Reporter: YufeiLiu
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> Execute sql "SELECT a1, b1 FROM A JOIN B ON a2 = b4", a2(BIGINT) b4(VARCHAR) 
> will throw exception 
> {code}
> org.apache.flink.table.api.TableException: VARCHAR(2147483647) and INTEGER 
> does not have common type now
>   at 
> org.apache.flink.table.planner.plan.rules.logical.JoinConditionTypeCoerceRule$$anonfun$onMatch$1.apply(JoinConditionTypeCoerceRule.scala:76)
>   at 
> org.apache.flink.table.planner.plan.rules.logical.JoinConditionTypeCoerceRule$$anonfun$onMatch$1.apply(JoinConditionTypeCoerceRule.scala:65)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> org.apache.flink.table.planner.plan.rules.logical.JoinConditionTypeCoerceRule.onMatch(JoinConditionTypeCoerceRule.scala:65)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:328)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:562)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:427)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:264)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:223)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:210)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
> {code}
> Should we do some implicit type coercion in this case? It works on old 
> version, and also can use in WHERE condition like "WHERE a4 = 3"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #17443: [FLINK-9465][Runtime/Checkpointing] Specify a separate savepoint timeout option via CLI and REST API

2021-10-09 Thread GitBox


flinkbot commented on pull request #17443:
URL: https://github.com/apache/flink/pull/17443#issuecomment-939267795


   
   ## CI report:
   
   * 79df4a732b55a8e7a0179b337776e36f7df5c83e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17438: [FLINK-24389] [Table SQL / API] Fix NPE in CatalogTableImpl#getDescription

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17438:
URL: https://github.com/apache/flink/pull/17438#issuecomment-938665309


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * 0c66bb628bffb8b866f4840c2170778382b86908 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-24492) incorrect implicit type conversion between numeric and (var)char

2021-10-09 Thread xuyangzhong (Jira)
xuyangzhong created FLINK-24492:
---

 Summary: incorrect implicit type conversion between numeric and 
(var)char
 Key: FLINK-24492
 URL: https://issues.apache.org/jira/browse/FLINK-24492
 Project: Flink
  Issue Type: Bug
Reporter: xuyangzhong


The result of the sql "select 1 = '1'" is false. This is caused by the CodeGen: 
CodeGen incorrectly transforms this "=" into "BinaryStringData.equals(int 1)". 
And "<>" has the same wrong result.

In my opinion, "=" should have the same behavior as ">" and "<", which produce 
the correct results. So before calcite solves this bug, we'd better temporarily 
forbid this implicit type conversion in "=" and "<>".
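
A minimal Table API snippet to reproduce the comparison described above (a sketch assuming a standard streaming {{TableEnvironment}} with a planner on the classpath; results may differ between Flink versions):
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ImplicitConversionRepro {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // per the report, '=' ends up in BinaryStringData.equals(int) and yields false,
        // while '>' and '<' apply a numeric comparison and behave as expected
        tEnv.executeSql("SELECT 1 = '1' AS eq, 1 > '0' AS gt, 1 < '2' AS lt").print();
    }
}
{code}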



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on pull request #17438: [FLINK-24389] [Table SQL / API] Fix NPE in CatalogTableImpl#getDescription

2021-10-09 Thread GitBox


wuchong commented on pull request #17438:
URL: https://github.com/apache/flink/pull/17438#issuecomment-939267119


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-9465) Specify a separate savepoint timeout option via CLI

2021-10-09 Thread Feifan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17426559#comment-17426559
 ] 

Feifan Wang commented on FLINK-9465:


Hi [~trohrmann], I have opened a pull request to resolve this, but there are still 
some unit tests that I think need to be completed. Can you take a glance at 
this PR and give me some guidance on the unit tests?

> Specify a separate savepoint timeout option via CLI
> ---
>
> Key: FLINK-9465
> URL: https://issues.apache.org/jira/browse/FLINK-9465
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.5.0
>Reporter: Truong Duc Kien
>Assignee: Feifan Wang
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Savepoint can take much longer time to perform than checkpoint, especially 
> with incremental checkpoint enabled. This leads to a couple of troubles:
>  * For our job, we currently have to set the checkpoint timeout much large 
> than necessary, otherwise we would be unable to perform savepoint. 
>  * During rush hour, our cluster would encounter high rate of checkpoint 
> timeout due to backpressure, however we're unable to migrate to a larger 
> configuration, because savepoint also timeout.
> In my opinion, the timeout for savepoint should be configurable separately, 
> both in the config file and as parameter to the savepoint command.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe closed pull request #16192: [FLINK-22954][table-planner-blink] Rewrite Join on constant TableFunctionScan to Correlate

2021-10-09 Thread GitBox


godfreyhe closed pull request #16192:
URL: https://github.com/apache/flink/pull/16192


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] nizhikov commented on a change in pull request #17438: [FLINK-24389] [Table SQL / API] Fix NPE in CatalogTableImpl#getDescription

2021-10-09 Thread GitBox


nizhikov commented on a change in pull request #17438:
URL: https://github.com/apache/flink/pull/17438#discussion_r725462366



##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/AbstractCatalogTable.java
##
@@ -37,18 +39,18 @@
 // Properties of the table
 private final Map<String, String> options;
 // Comment of the table
-private final String comment;
+@Nullable private final String comment;
 
 public AbstractCatalogTable(
-TableSchema tableSchema, Map<String, String> options, String 
comment) {
+TableSchema tableSchema, Map<String, String> options, @Nullable 
String comment) {
 this(tableSchema, new ArrayList<>(), options, comment);
 }
 
 public AbstractCatalogTable(
 TableSchema tableSchema,
 List<String> partitionKeys,
 Map<String, String> options,
-String comment) {
+@Nullable String comment) {

Review comment:
   Thanks for the feedback.
   Patch reworked according to your proposal.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17443: [FLINK-9465][Runtime/Checkpointing] Specify a separate savepoint timeout option via CLI and REST API

2021-10-09 Thread GitBox


flinkbot commented on pull request #17443:
URL: https://github.com/apache/flink/pull/17443#issuecomment-939264095


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 79df4a732b55a8e7a0179b337776e36f7df5c83e (Sat Oct 09 
09:23:35 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zoltar9264 opened a new pull request #17443: [FLINK-9465][Runtime/Checkpointing] Specify a separate savepoint timeout option via CLI and REST API

2021-10-09 Thread GitBox


zoltar9264 opened a new pull request #17443:
URL: https://github.com/apache/flink/pull/17443


   ## What is the purpose of the change
   
   This pull request supports specifying a separate savepoint timeout option via 
REST API and CLI, as described in 
[FLINK-9465](https://issues.apache.org/jira/browse/FLINK-9465).
   
   
   ## Brief change log
   
 - *CheckpointCoordinator.CheckpointTriggerRequest* add savepointTimeout 
field,
 - *CheckpointCoordinator#createPendingCheckpoint()* add a "timeout" 
parameter, and use it as canceller trigger delay if it > 0,
 - *CheckpointCoordinator, RestfulGateway, 
JobMasterGateway,SchedulerNG,ClusterClient* add 
triggerSavepoint/stopWithSavepoint method with parameter "savepointTimeout"
 - *CliFrontend#savepoint(), CliFrontend#stop(), SavepointTriggerHandler, 
StopWithSavepointHandler* migrate to method with "savepointTimeout"
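   
   As a rough illustration of the scheduling behavior described above (names and values are illustrative only, not the actual Flink signatures): a positive per-request savepoint timeout is used as the canceller delay, otherwise the configured checkpoint timeout applies.
   
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SavepointTimeoutSketch {
    public static void main(String[] args) throws InterruptedException {
        long checkpointTimeoutMs = 600_000L; // timeout configured for checkpoints
        long savepointTimeoutMs = 5_000L;    // per-request value; <= 0 means "not set"

        // prefer the savepoint timeout passed with the trigger request, if any
        long cancellerDelayMs =
                savepointTimeoutMs > 0 ? savepointTimeoutMs : checkpointTimeoutMs;

        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.schedule(
                () -> System.out.println("savepoint timed out, cancelling"),
                cancellerDelayMs,
                TimeUnit.MILLISECONDS);

        TimeUnit.SECONDS.sleep(6);
        timer.shutdownNow();
    }
}
```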
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: yes
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17426: [FLINK-24167][Runtime]Add default HeartbeatReceiver and HeartbeatSend…

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17426:
URL: https://github.com/apache/flink/pull/17426#issuecomment-938516939


   
   ## CI report:
   
   * b3f889f3c75ca26e6e54e45694d207e529d9dd43 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24889)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24448) Tumbling Window Not working with EPOOCH Time converted using TO_TIMESTAMP_LTZ

2021-10-09 Thread Mehul Batra (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehul Batra updated FLINK-24448:

Description: 
*When I run my code with connector = 'print' to inspect my window-aggregated 
data, nothing is printed in version 1.14.0; when I exclude the tumbling window, 
data is printed. The same code works with the tumble window in FLINK 1.13.1.* 

SQL API CONNECTORS TO REFER :

tableEnv.executeSql("CREATE TABLE IF NOT EXISTS Teamtopic (\n"
 + " eventName String,\n"
 + " ingestion_time BIGINT,\n"
 + " t_ltz as TO_TIMESTAMP_LTZ(ingestion_time,3) , 
 + " WATERMARK FOR t_ltz AS t_ltz - INTERVAL '5' SECOND 
 + " as event-time attribute\n"
 + ") WITH (\n"
 + " 'connector' = 'kafka'

tableEnv.executeSql("CREATE TABLE minutess (\n"
 + " `minute` TIMESTAMP(3),\n"
 + " hits BIGINT ,\n"
 + " type STRING\n"
 + ") WITH (\n"
 + " 'connector' = 'print' "
 + ")");

tableEnv.createStatementSet()
 .addInsertSql("INSERT INTO minutess \n"
 + " SELECT "
 + "TUMBLE_END(t_ltz,INTERVAL '1' MINUTE) AS windowmin ,"
 + "COUNT(eventName) as hits, "
 + "'team_save_failed_minute_error_types' as type\n"
 + " FROM TeamSaveFailed\n"
 +" GROUP BY TUMBLE(t_ltz, INTERVAL '1' MINUTE ),eventName")
 .execute();

  was:
*When I am running my code to test the connector = 'print' to see my window 
aggregated data it is not printing anything and when I am excluding the 
tumbling window it is printing data, but the same code is working with the 
tumble window in FLINK 1.13.1.* 



SQL API CONNECTORS TO REFER :


tableEnv.executeSql("CREATE TABLE IF NOT EXISTS Teamtopic (\n"
 + " eventName String,\n"
 + " ingestion_time BIGINT,\n"
 + " t_ltz as TO_TIMESTAMP_LTZ(ingestion_time,3) , 
 + " WATERMARK FOR t_ltz AS t_ltz - INTERVAL '5' SECOND 
 + " as event-time attribute\n"
 + ") WITH (\n"
 + " 'connector' = 'kafka'




tableEnv.executeSql("CREATE TABLE minutess (\n"
 + " `minute` TIMESTAMP(3),\n"
 + " hits BIGINT ,\n"
 + " type STRING\n"
 + ") WITH (\n"
 + " 'connector' = 'print' "
 + ")");



tableEnv.createStatementSet()
 .addInsertSql("INSERT INTO minutess \n"
 + " SELECT "
 + "TUMBLE_END(t_ltz,INTERVAL '1' MINUTE) AS windowmin ,"
 + "COUNT(eventName) as hits, "
 + "'team_save_failed_minute_error_types' as type\n"
 + " FROM TeamSaveFailed\n"
 +" GROUP BY TUMBLE(t_ltz, INTERVAL '1' MINUTE ),eventName")
 .execute();


> Tumbling Window Not working with EPOOCH Time converted using TO_TIMESTAMP_LTZ
> -
>
> Key: FLINK-24448
> URL: https://issues.apache.org/jira/browse/FLINK-24448
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Reporter: Mehul Batra
>Priority: Major
>
> *When I am running my code to test the connector = 'print' to see my window 
> aggregated data it is not printing anything in version 1.14.0 and when I am 
> excluding the tumbling window it is printing data, but the same code is 
> working with the tumble window in FLINK 1.13.1.* 
> SQL API CONNECTORS TO REFER :
> tableEnv.executeSql("CREATE TABLE IF NOT EXISTS Teamtopic (\n"
>  + " eventName String,\n"
>  + " ingestion_time BIGINT,\n"
>  + " t_ltz as TO_TIMESTAMP_LTZ(ingestion_time,3) , 
>  + " WATERMARK FOR t_ltz AS t_ltz - INTERVAL '5' SECOND 
>  + " as event-time attribute\n"
>  + ") WITH (\n"
>  + " 'connector' = 'kafka'
> tableEnv.executeSql("CREATE TABLE minutess (\n"
>  + " `minute` TIMESTAMP(3),\n"
>  + " hits BIGINT ,\n"
>  + " type STRING\n"
>  + ") WITH (\n"
>  + " 'connector' = 'print' "
>  + ")");
> tableEnv.createStatementSet()
>  .addInsertSql("INSERT INTO minutess \n"
>  + " SELECT "
>  + "TUMBLE_END(t_ltz,INTERVAL '1' MINUTE) AS windowmin ,"
>  + "COUNT(eventName) as hits, "
>  + "'team_save_failed_minute_error_types' as type\n"
>  + " FROM TeamSaveFailed\n"
>  +" GROUP BY TUMBLE(t_ltz, INTERVAL '1' MINUTE ),eventName")
>  .execute();



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24448) Tumbling Window Not working with EPOOCH Time converted using TO_TIMESTAMP_LTZ

2021-10-09 Thread Mehul Batra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17426555#comment-17426555
 ] 

Mehul Batra commented on FLINK-24448:
-

Version 1.14.0

> Tumbling Window Not working with EPOOCH Time converted using TO_TIMESTAMP_LTZ
> -
>
> Key: FLINK-24448
> URL: https://issues.apache.org/jira/browse/FLINK-24448
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Reporter: Mehul Batra
>Priority: Major
>
> *When I am running my code to test the connector = 'print' to see my window 
> aggregated data it is not printing anything and when I am excluding the 
> tumbling window it is printing data, but the same code is working with the 
> tumble window in FLINK 1.13.1.* 
> SQL API CONNECTORS TO REFER :
> tableEnv.executeSql("CREATE TABLE IF NOT EXISTS Teamtopic (\n"
>  + " eventName String,\n"
>  + " ingestion_time BIGINT,\n"
>  + " t_ltz as TO_TIMESTAMP_LTZ(ingestion_time,3) , 
>  + " WATERMARK FOR t_ltz AS t_ltz - INTERVAL '5' SECOND 
>  + " as event-time attribute\n"
>  + ") WITH (\n"
>  + " 'connector' = 'kafka'
> tableEnv.executeSql("CREATE TABLE minutess (\n"
>  + " `minute` TIMESTAMP(3),\n"
>  + " hits BIGINT ,\n"
>  + " type STRING\n"
>  + ") WITH (\n"
>  + " 'connector' = 'print' "
>  + ")");
> tableEnv.createStatementSet()
>  .addInsertSql("INSERT INTO minutess \n"
>  + " SELECT "
>  + "TUMBLE_END(t_ltz,INTERVAL '1' MINUTE) AS windowmin ,"
>  + "COUNT(eventName) as hits, "
>  + "'team_save_failed_minute_error_types' as type\n"
>  + " FROM TeamSaveFailed\n"
>  +" GROUP BY TUMBLE(t_ltz, INTERVAL '1' MINUTE ),eventName")
>  .execute();



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24122) Add support to do clean in history server

2021-10-09 Thread zlzhang0122 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17426554#comment-17426554
 ] 

zlzhang0122 commented on FLINK-24122:
-

[~trohrmann] [~pnowojski] [~jark] [~gyfora] what do you think? Any suggestions 
are much appreciated!

> Add support to do clean in history server
> -
>
> Key: FLINK-24122
> URL: https://issues.apache.org/jira/browse/FLINK-24122
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.12.3, 1.13.2
>Reporter: zlzhang0122
>Priority: Minor
> Fix For: 1.14.1
>
>
> Now, the history server can clean history jobs by two means:
>  # if users have configured 
> {code:java}
> historyserver.archive.clean-expired-jobs: true{code}
> , then compare the files in hdfs over two clean interval and find the delete 
> and clean the local cache file.
>  # if users have configured the 
> {code:java}
> historyserver.archive.retained-jobs:{code}
> a positive number, then clean the oldest files in hdfs and local.
> But the retained-jobs number is difficult to determine.
> For example, users may want to check the history jobs yesterday while many 
> jobs failed today and exceed the retained-jobs number, then the history jobs 
> of yesterday will be delete. So what if add a configuration which contain a 
> retained-times that indicate the max time the history job retain?
> Also it can't clean the job history files which was no longer in hdfs but 
> still cached in local filesystem and these files will store forever and can't 
> be cleaned unless users manually do this. Maybe we can give a option and do 
> this clean if the option says true.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   *  Unknown: [CANCELED](TBD) 
   * 54a7d0d8f8dad4d7d22db13e405534f95fd32ce3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24894)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24122) Add support to do clean in history server

2021-10-09 Thread zlzhang0122 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zlzhang0122 updated FLINK-24122:

Description: 
Now, the history server can clean history jobs in two ways:
 # if users have configured 
{code:java}
historyserver.archive.clean-expired-jobs: true{code}
, then the files in hdfs are compared over two clean intervals, the deleted ones 
are found, and the local cache files are cleaned.

 # if users have configured 
{code:java}
historyserver.archive.retained-jobs:{code}
with a positive number, then the oldest files in hdfs and locally are cleaned.

But the retained-jobs number is difficult to determine.

For example, users may want to check yesterday's history jobs while many jobs 
fail today and exceed the retained-jobs number; then yesterday's history jobs 
will be deleted. So what if we add a configuration containing a retained-time 
that indicates the maximum time a history job is retained?

Also, it can't clean the job history files which are no longer in hdfs but are 
still cached in the local filesystem; these files will be stored forever and 
can't be cleaned unless users do it manually. Maybe we can add an option and do 
this cleaning if the option is true.

  was:
Now, the history server can clean history jobs by two means:
 # if users have configured 
{code:java}
historyserver.archive.clean-expired-jobs: true{code}
, then compare the files in hdfs over two clean interval and find the delete 
and clean the local cache file.

 # if users have configured the 
{code:java}
historyserver.archive.retained-jobs:{code}
a positive number, then clean the oldest files in hdfs and local.

But the retained-jobs number is difficult to determine.

For example, users may want to check the history jobs yesterday while many jobs 
failed today and exceed the retained-jobs number, then the history jobs of 
yesterday will be delete. So what if add a configuration which contain a 
retained-times that indicate the max time the history job retain?

Also it can't clean the job history which was no longer in hdfs but still 
cached in local filesystem and these files will store forever and can't be 
cleaned unless users manually do this. Maybe we can give a option and do this 
clean if the option says true.


> Add support to do clean in history server
> -
>
> Key: FLINK-24122
> URL: https://issues.apache.org/jira/browse/FLINK-24122
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.12.3, 1.13.2
>Reporter: zlzhang0122
>Priority: Minor
> Fix For: 1.14.1
>
>
> Now, the history server can clean history jobs by two means:
>  # if users have configured 
> {code:java}
> historyserver.archive.clean-expired-jobs: true{code}
> , then compare the files in hdfs over two clean interval and find the delete 
> and clean the local cache file.
>  # if users have configured the 
> {code:java}
> historyserver.archive.retained-jobs:{code}
> a positive number, then clean the oldest files in hdfs and local.
> But the retained-jobs number is difficult to determine.
> For example, users may want to check the history jobs yesterday while many 
> jobs failed today and exceed the retained-jobs number, then the history jobs 
> of yesterday will be delete. So what if add a configuration which contain a 
> retained-times that indicate the max time the history job retain?
> Also it can't clean the job history files which was no longer in hdfs but 
> still cached in local filesystem and these files will store forever and can't 
> be cleaned unless users manually do this. Maybe we can give a option and do 
> this clean if the option says true.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24489) The size of entryCache in the SharedBuffer should be defined with a threshold

2021-10-09 Thread Roc Marshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17426550#comment-17426550
 ] 

Roc Marshal commented on FLINK-24489:
-

cc [~jark] [~aljoscha] [~pnowojski] .

> The size of entryCache in the SharedBuffer should be defined with a threshold
> -
>
> Key: FLINK-24489
> URL: https://issues.apache.org/jira/browse/FLINK-24489
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / CEP
>Affects Versions: 1.10.0, 1.11.0, 1.12.0, 1.13.0, 1.14.0
>Reporter: Roc Marshal
>Priority: Major
>
> [here|https://github.com/apache/flink/blob/c3cb886ee73b5fee23b2bccff0f5e4d45a30b3a1/flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/sharedbuffer/SharedBuffer.java#L79]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-ml] yunfengzhou-hub commented on a change in pull request #8: [FLINK-4][iteration] Add operator wrapper for all-round iterations.

2021-10-09 Thread GitBox


yunfengzhou-hub commented on a change in pull request #8:
URL: https://github.com/apache/flink-ml/pull/8#discussion_r725457783



##
File path: 
flink-ml-iteration/src/main/java/org/apache/flink/ml/iteration/proxy/ProxyOutput.java
##
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.iteration.proxy;
+
+import org.apache.flink.ml.iteration.IterationRecord;
+import org.apache.flink.ml.iteration.typeinfo.IterationRecordTypeInfo;
+import org.apache.flink.streaming.api.operators.Output;
+import org.apache.flink.streaming.api.watermark.Watermark;
+import org.apache.flink.streaming.runtime.streamrecord.LatencyMarker;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.util.OutputTag;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/** Proxy output to provide to the wrapped operator. */
+public class ProxyOutput<T> implements Output<StreamRecord<T>> {
+
+private final Output<StreamRecord<IterationRecord<T>>> output;
+
+private final StreamRecord<IterationRecord<T>> reuseRecord;
+
+private final Map<String, SideOutputCache> sideOutputCaches = new 
HashMap<>();
+
+private Integer contextRound;
+
+public ProxyOutput(Output<StreamRecord<IterationRecord<T>>> output) {
+this.output = Objects.requireNonNull(output);
+this.reuseRecord = new StreamRecord<>(IterationRecord.newRecord(null, 
0));
+}
+
+public void setContextRound(Integer contextRound) {
+this.contextRound = contextRound;
+}
+
+@Override
+public void emitWatermark(Watermark mark) {
+output.emitWatermark(mark);
+}
+
+@Override
+@SuppressWarnings({"unchecked", "rawtypes"})
+public <X> void collect(OutputTag<X> outputTag, StreamRecord<X> record) {
+SideOutputCache sideOutputCache =
+sideOutputCaches.computeIfAbsent(
+outputTag.getId(),
+(ignored) ->
+new SideOutputCache(
+new OutputTag>(
+outputTag.getId(),
+new IterationRecordTypeInfo(
+
outputTag.getTypeInfo())),
+new 
StreamRecord<>(IterationRecord.newRecord(null, 0;
+sideOutputCache.cachedRecord.replace(
+IterationRecord.newRecord(record.getValue(), contextRound), 
record.getTimestamp());
+output.collect(sideOutputCache.tag, sideOutputCache.cachedRecord);
+}
+
+@Override
+public void emitLatencyMarker(LatencyMarker latencyMarker) {
+output.emitLatencyMarker(latencyMarker);
+}
+
+@Override
+public void collect(StreamRecord<T> tStreamRecord) {
+reuseRecord.getValue().setValue(tStreamRecord.getValue());
+reuseRecord.getValue().setRound(contextRound);

Review comment:
   The round or epoch of a record does not increase when it finishes a loop 
and returns to `HeadOperator`. Instead, it increases in this `ProxyOutput` via 
`contextRound`, whose value equals the epoch watermark. What is the rationale 
for this design choice over the alternative?
   
   From my perspective, if the round of a record always equals the previous 
epoch watermark, it might be unnecessary to assign a `round` to each record. 
Operators inside the iteration could keep the current round as context and 
use it as the default round value for incoming records.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-21345) NullPointerException LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:157

2021-10-09 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-21345.
--
Resolution: Fixed

Fixed in 1.15.0: 24e6121d5f882e55dfc0616b1da81dc0b46f2d34
Fixed in 1.14.1: 3598a16f4d2d46b75f15a4eb01610ecfe2640f1e

> NullPointerException 
> LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:157
> --
>
> Key: FLINK-21345
> URL: https://issues.apache.org/jira/browse/FLINK-21345
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.1
> Environment: Planner: BlinkPlanner
> Flink Version: 1.12.1_2.11
> Java Version: 1.8
> OS: mac os
>Reporter: Lyn Zhang
>Assignee: Lyn Zhang
>Priority: Minor
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.15.0, 1.14.1
>
> Attachments: image-2021-02-10-16-00-45-553.png
>
>
> First Step: Create 2 Source Tables as below:
> {code:java}
> CREATE TABLE test_streaming(
>  vid BIGINT,
>  ts BIGINT,
>  proc AS proctime()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'test-streaming',
>  'properties.bootstrap.servers' = '127.0.0.1:9092',
>  'scan.startup.mode' = 'latest-offset',
>  'format' = 'json'
> );
> CREATE TABLE test_streaming2(
>  vid BIGINT,
>  ts BIGINT,
>  proc AS proctime()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'test-streaming2',
>  'properties.bootstrap.servers' = '127.0.0.1:9092',
>  'scan.startup.mode' = 'latest-offset',
>  'format' = 'json'
> );
> {code}
> Second Step: Create a TEMPORARY Table Function, function name:dim, key:vid, 
> timestamp:proctime()
> Third Step: test_streaming union all  test_streaming2 join dim like below:
> {code:java}
> SELECT r.vid,d.name,timestamp_from_long(r.ts)
> FROM (
> SELECT * FROM test_streaming UNION ALL SELECT * FROM test_streaming2
> ) AS r,
> LATERAL TABLE (dim(r.proc)) AS d
> WHERE r.vid = d.vid;
> {code}
> Exception Detail: (if only use test-streaming or test-streaming2 join 
> temporary table function, the program run ok)
> {code:java}
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.flink.table.planner.plan.rules.logical.LogicalCorrelateToJoinFromTemporalTableFunctionRule.getRelOptSchema(LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:157)
>   at 
> org.apache.flink.table.planner.plan.rules.logical.LogicalCorrelateToJoinFromTemporalTableFunctionRule.onMatch(LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:99)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:155)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:155)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:155)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:155)
>   at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   *  Unknown: [CANCELED](TBD) 
   * 54a7d0d8f8dad4d7d22db13e405534f95fd32ce3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24480) EqualiserCodeGeneratorTest fails on azure

2021-10-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-24480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17426532#comment-17426532
 ] 

Ingo Bürk commented on FLINK-24480:
---

[~jark] I'm currently on vacation but can have a look when I'm back.

> EqualiserCodeGeneratorTest fails on azure
> -
>
> Key: FLINK-24480
> URL: https://issues.apache.org/jira/browse/FLINK-24480
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.5, 1.13.2
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.6, 1.13.3
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24809=logs=955770d3-1fed-5a0a-3db6-0c7554c910cb=14447d61-56b4-5000-80c1-daa459247f6a=42615
> {code}
> Oct 07 01:11:46 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 8.236 s <<< FAILURE! - in 
> org.apache.flink.table.planner.codegen.EqualiserCodeGeneratorTest
> Oct 07 01:11:46 [ERROR] 
> testManyFields(org.apache.flink.table.planner.codegen.EqualiserCodeGeneratorTest)
>   Time elapsed: 8.21 s  <<< FAILURE!
> Oct 07 01:11:46 java.lang.AssertionError: Expected compilation to succeed
> Oct 07 01:11:46   at org.junit.Assert.fail(Assert.java:88)
> Oct 07 01:11:46   at 
> org.apache.flink.table.planner.codegen.EqualiserCodeGeneratorTest.testManyFields(EqualiserCodeGeneratorTest.java:102)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   *  Unknown: [CANCELED](TBD) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17436: [FLINK-15987]SELECT 1.0e0 / 0.0e0 throws NumberFormatException

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17436:
URL: https://github.com/apache/flink/pull/17436#issuecomment-938641830


   
   ## CI report:
   
   * d14a0793a2628a8c9bd778ccbee95e7419f15346 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24857)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24891)
 
   * 5d0f5029884cd1df7cee4b455952122fce5046ed Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24893)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-898859870


   
   ## CI report:
   
   * 860f7acaf90b23fbff802792749ef1bd0479a414 UNKNOWN
   *  Unknown: [CANCELED](TBD) 
   * 30735755524707e1a408a29fc59404479b7ac5c3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24888)
 
   * 54a7d0d8f8dad4d7d22db13e405534f95fd32ce3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] horacehylee commented on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


horacehylee commented on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-939244716


   @xintongsong should be all good now.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] horacehylee commented on pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options

2021-10-09 Thread GitBox


horacehylee commented on pull request #16821:
URL: https://github.com/apache/flink/pull/16821#issuecomment-939244581


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-24491) ExecutionGraphInfo may not be archived when the dispatcher terminates

2021-10-09 Thread Zhilong Hong (Jira)
Zhilong Hong created FLINK-24491:


 Summary: ExecutionGraphInfo may not be archived when the 
dispatcher terminates
 Key: FLINK-24491
 URL: https://issues.apache.org/jira/browse/FLINK-24491
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Configuration
Affects Versions: 1.13.2, 1.14.0, 1.15.0
Reporter: Zhilong Hong
 Fix For: 1.13.3, 1.15.0, 1.14.1


When a job finishes, its JobManagerRunnerResult is processed in the callback of 
{{Dispatcher#runJob}}. In that callback, the ExecutionGraphInfo is archived by 
the HistoryServerArchivist asynchronously. However, the CompletableFuture of 
the archiving is ignored, so the job may be removed before the archiving has 
finished. For a batch job running in per-job/application mode, the dispatcher 
terminates itself once the job is finished. In this case, the 
ExecutionGraphInfo may not be archived by the time the dispatcher terminates.

If the ExecutionGraphInfo is lost, users cannot tell whether the batch job 
finished normally; they have to consult the logs for the result.

The session mode is not affected, since the dispatcher does not terminate 
itself once a job is finished, so the HistoryServerArchivist gets enough time 
to archive the ExecutionGraphInfo.
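
Below is a minimal, hypothetical sketch of the pattern in question (the method 
names archiveExecutionGraphInfo and removeJob are illustrative only and do not 
correspond to the real Dispatcher internals). It shows why chaining the archive 
CompletableFuture into the job-removal path would prevent termination from 
overtaking the archive write:

{code:java}
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch; not the actual Dispatcher code.
class ArchivingSketch {

    // Stand-in for the asynchronous HistoryServerArchivist call.
    CompletableFuture<Void> archiveExecutionGraphInfo(Object executionGraphInfo) {
        return CompletableFuture.runAsync(() -> {
            // write executionGraphInfo to the configured archive location
        });
    }

    // Problematic pattern: the archive future is dropped, so job removal (and
    // dispatcher termination in per-job/application mode) can race with archiving.
    void onJobFinishedIgnoringFuture(Object executionGraphInfo, Runnable removeJob) {
        archiveExecutionGraphInfo(executionGraphInfo); // result ignored
        removeJob.run();
    }

    // Safer pattern: remove the job (and allow termination) only after the
    // archive write has completed, whether it succeeded or failed.
    CompletableFuture<Void> onJobFinishedAwaitingArchive(
            Object executionGraphInfo, Runnable removeJob) {
        return archiveExecutionGraphInfo(executionGraphInfo)
                .whenComplete((ignored, error) -> removeJob.run());
    }
}
{code}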



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17436: [FLINK-15987]SELECT 1.0e0 / 0.0e0 throws NumberFormatException

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17436:
URL: https://github.com/apache/flink/pull/17436#issuecomment-938641830


   
   ## CI report:
   
   * d14a0793a2628a8c9bd778ccbee95e7419f15346 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24857)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24891)
 
   * 5d0f5029884cd1df7cee4b455952122fce5046ed UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24390) Python 'build_wheels mac' fails on azure

2021-10-09 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-24390.
---
Resolution: Fixed

Fixed in release-1.12 via 5dd50ef0e69eb104ad99a1a09f4e0e15b16cf647

> Python 'build_wheels mac' fails on azure
> 
>
> Key: FLINK-24390
> URL: https://issues.apache.org/jira/browse/FLINK-24390
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Build System
>Affects Versions: 1.12.5
>Reporter: Xintong Song
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.6
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24547=logs=33dd8067-7758-552f-a1cf-a8b8ff0e44cd=789348ee-cf3e-5c4b-7c78-355970e5f360=17982



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu merged pull request #17433: [FLINK-24390][python] Limit the grpcio version <= 1.40.0 for Python 3.5

2021-10-09 Thread GitBox


dianfu merged pull request #17433:
URL: https://github.com/apache/flink/pull/17433


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] link3280 edited a comment on pull request #17320: [FLINK-24319][Tests] Fix invalid surefire work directory for flink-tests

2021-10-09 Thread GitBox


link3280 edited a comment on pull request #17320:
URL: https://github.com/apache/flink/pull/17320#issuecomment-939240605


   @zentol PTAL, this blocks users from running Flink tests outside of the 
official CI pipeline.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] link3280 commented on pull request #17320: [FLINK-24319][Tests] Fix invalid surefire work directory for flink-tests

2021-10-09 Thread GitBox


link3280 commented on pull request #17320:
URL: https://github.com/apache/flink/pull/17320#issuecomment-939240605


   @zentol PTAL, this blocks users from running Flink tests outside of official 
CI pipelines.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] link3280 commented on pull request #17318: [hotfix][tests] Azure tests fail prematurely because timeout watchdog fails to sleep

2021-10-09 Thread GitBox


link3280 commented on pull request #17318:
URL: https://github.com/apache/flink/pull/17318#issuecomment-939240034


   @rmetzger PTAL, a quick one.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17436: [FLINK-15987]SELECT 1.0e0 / 0.0e0 throws NumberFormatException

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17436:
URL: https://github.com/apache/flink/pull/17436#issuecomment-938641830


   
   ## CI report:
   
   * d14a0793a2628a8c9bd778ccbee95e7419f15346 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24857)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24891)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-21345) NullPointerException LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:157

2021-10-09 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-21345:
--

Assignee: Lyn Zhang

> NullPointerException 
> LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:157
> --
>
> Key: FLINK-21345
> URL: https://issues.apache.org/jira/browse/FLINK-21345
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.1
> Environment: Planner: BlinkPlanner
> Flink Version: 1.12.1_2.11
> Java Version: 1.8
> OS: mac os
>Reporter: Lyn Zhang
>Assignee: Lyn Zhang
>Priority: Minor
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.15.0, 1.14.1
>
> Attachments: image-2021-02-10-16-00-45-553.png
>
>
> First Step: Create 2 Source Tables as below:
> {code:java}
> CREATE TABLE test_streaming(
>  vid BIGINT,
>  ts BIGINT,
>  proc AS proctime()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'test-streaming',
>  'properties.bootstrap.servers' = '127.0.0.1:9092',
>  'scan.startup.mode' = 'latest-offset',
>  'format' = 'json'
> );
> CREATE TABLE test_streaming2(
>  vid BIGINT,
>  ts BIGINT,
>  proc AS proctime()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'test-streaming2',
>  'properties.bootstrap.servers' = '127.0.0.1:9092',
>  'scan.startup.mode' = 'latest-offset',
>  'format' = 'json'
> );
> {code}
> Second Step: Create a TEMPORARY Table Function, function name:dim, key:vid, 
> timestamp:proctime()
> Third Step: test_streaming union all  test_streaming2 join dim like below:
> {code:java}
> SELECT r.vid,d.name,timestamp_from_long(r.ts)
> FROM (
> SELECT * FROM test_streaming UNION ALL SELECT * FROM test_streaming2
> ) AS r,
> LATERAL TABLE (dim(r.proc)) AS d
> WHERE r.vid = d.vid;
> {code}
> Exception Detail: (if only test_streaming or test_streaming2 joins the 
> temporal table function, the program runs OK)
> {code:java}
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.flink.table.planner.plan.rules.logical.LogicalCorrelateToJoinFromTemporalTableFunctionRule.getRelOptSchema(LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:157)
>   at 
> org.apache.flink.table.planner.plan.rules.logical.LogicalCorrelateToJoinFromTemporalTableFunctionRule.onMatch(LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:99)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:155)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:155)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:155)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:155)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:155)
>   at 

[GitHub] [flink] flinkbot edited a comment on pull request #17426: [FLINK-24167][Runtime]Add default HeartbeatReceiver and HeartbeatSend…

2021-10-09 Thread GitBox


flinkbot edited a comment on pull request #17426:
URL: https://github.com/apache/flink/pull/17426#issuecomment-938516939


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * b3f889f3c75ca26e6e54e45694d207e529d9dd43 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24889)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe closed pull request #14916: [FLINK-21345][Table SQL / Planner] Fix BUG of Union All join Temporal…

2021-10-09 Thread GitBox


godfreyhe closed pull request #14916:
URL: https://github.com/apache/flink/pull/14916


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-10230) Support 'SHOW CREATE VIEW' syntax to print the query of a view

2021-10-09 Thread Roc Marshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425935#comment-17425935
 ] 

Roc Marshal edited comment on FLINK-10230 at 10/9/21, 6:17 AM:
---

Could you [~MartijnVisser] help me merge it if there is nothing inappropriate? 
Thank you.


was (Author: rocmarshal):
Could someone help me merge it if there is nothing inappropriate? Thank you.

> Support 'SHOW CREATE VIEW' syntax to print the query of a view
> --
>
> Key: FLINK-10230
> URL: https://issues.apache.org/jira/browse/FLINK-10230
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Client
>Reporter: Timo Walther
>Assignee: Roc Marshal
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.15.0
>
>
> FLINK-10163 added initial support for views in SQL Client. We should add a 
> command that allows for printing the query of a view for debugging. MySQL 
> offers {{SHOW CREATE VIEW}} for this. Hive generalizes this to {{SHOW CREATE 
> TABLE}}. The latter one could be extended to also show information about the 
> used table factories and properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xuyangzhong commented on pull request #17436: [FLINK-15987]SELECT 1.0e0 / 0.0e0 throws NumberFormatException

2021-10-09 Thread GitBox


xuyangzhong commented on pull request #17436:
URL: https://github.com/apache/flink/pull/17436#issuecomment-939236099


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24122) Add support to do clean in history server

2021-10-09 Thread zlzhang0122 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zlzhang0122 updated FLINK-24122:

Description: 
Now, the history server can clean up history jobs in two ways:
 # if users have configured 
{code:java}
historyserver.archive.clean-expired-jobs: true{code}
, it compares the files in HDFS across two clean intervals, detects the deleted 
ones and removes the corresponding local cache files.

 # if users have configured 
{code:java}
historyserver.archive.retained-jobs:{code}
to a positive number, it cleans up the oldest files in HDFS and locally.

However, the retained-jobs number is difficult to determine.

For example, users may want to check yesterday's history jobs, but if many jobs 
fail today and exceed the retained-jobs number, yesterday's history jobs will 
be deleted. What if we added a configuration containing a retained-time that 
indicates the maximum time a history job is retained?

Also, the history server cannot clean up job history that is no longer in HDFS 
but is still cached in the local filesystem; these files are kept forever and 
cannot be cleaned unless users do it manually. Maybe we can add an option and 
perform this cleanup when the option is set to true.

  was:
Now, the history server can clean up history jobs in two ways:
 # if users have configured 
{code:java}
historyserver.archive.clean-expired-jobs: true{code}
, it compares the files in HDFS across two clean intervals, detects the deleted 
ones and removes the corresponding local cache files.

 # if users have configured 
{code:java}
historyserver.archive.retained-jobs:{code}
to a positive number, it cleans up the oldest files in HDFS and locally.

However, the retained-jobs number is difficult to determine.

For example, users may want to check yesterday's history jobs, but if many jobs 
fail today and exceed the retained-jobs number, yesterday's history jobs will 
be deleted. What if we added a configuration containing a retained-time that 
indicates the maximum time a history job is retained?


> Add support to do clean in history server
> -
>
> Key: FLINK-24122
> URL: https://issues.apache.org/jira/browse/FLINK-24122
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.12.3, 1.13.2
>Reporter: zlzhang0122
>Priority: Minor
> Fix For: 1.14.1
>
>
> Now, the history server can clean up history jobs in two ways:
>  # if users have configured 
> {code:java}
> historyserver.archive.clean-expired-jobs: true{code}
> , it compares the files in HDFS across two clean intervals, detects the 
> deleted ones and removes the corresponding local cache files.
>  # if users have configured 
> {code:java}
> historyserver.archive.retained-jobs:{code}
> to a positive number, it cleans up the oldest files in HDFS and locally.
> However, the retained-jobs number is difficult to determine.
> For example, users may want to check yesterday's history jobs, but if many 
> jobs fail today and exceed the retained-jobs number, yesterday's history jobs 
> will be deleted. What if we added a configuration containing a retained-time 
> that indicates the maximum time a history job is retained?
> Also, the history server cannot clean up job history that is no longer in 
> HDFS but is still cached in the local filesystem; these files are kept 
> forever and cannot be cleaned unless users do it manually. Maybe we can add 
> an option and perform this cleanup when the option is set to true.
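
As a minimal sketch of the retained-time idea proposed above (assuming each 
archived job exposes a last-modified timestamp; the class and the option name 
"historyserver.archive.retained-time" are hypothetical and do not exist in 
Flink today):

{code:java}
import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch of a time-based retention check for the history server.
class RetainedTimeSketch {

    private final Duration retainedTime;

    RetainedTimeSketch(Duration retainedTime) {
        this.retainedTime = retainedTime;
    }

    // An archived job is expired once it is older than the configured retained
    // time, independently of how many newer archives exist.
    boolean isExpired(Instant archiveLastModified, Instant now) {
        return Duration.between(archiveLastModified, now).compareTo(retainedTime) > 0;
    }
}
{code}

With a retained time of, say, seven days, yesterday's archives would survive 
even if many jobs fail today and the retained-jobs count alone would otherwise 
evict them.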



--
This message was sent by Atlassian Jira
(v8.3.4#803005)