[jira] [Commented] (FLINK-12450) [Bitwise Functions] Add BIT_LSHIFT, BIT_RSHIFT functions supported in Table API and SQL

2024-06-11 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17853990#comment-17853990
 ] 

Ran Tao commented on FLINK-12450:
-

[~kartikeypant] thanks for pushing this issue, feel free to assign it to 
yourself if it's still valid. :)
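For reference, the intended semantics in plain Java (a sketch only; it assumes 
MySQL-compatible behavior of the << and >> operators on unsigned 64-bit values, 
and the SQL function names come from this ticket):
{code:java}
public class BitShiftSemantics {

    // BIT_LSHIFT(value, shift): shifts a long number to the left.
    static long bitLshift(long value, int shift) {
        return value << shift;
    }

    // BIT_RSHIFT(value, shift): shifts a long number to the right. MySQL
    // treats the operand as unsigned, hence the logical shift here.
    static long bitRshift(long value, int shift) {
        return value >>> shift;
    }

    public static void main(String[] args) {
        System.out.println(bitLshift(1L, 3));  // 8
        System.out.println(bitRshift(16L, 2)); // 4
    }
}
{code}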

> [Bitwise Functions] Add BIT_LSHIFT, BIT_RSHIFT functions supported in Table 
> API and SQL
> ---
>
> Key: FLINK-12450
> URL: https://issues.apache.org/jira/browse/FLINK-12450
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Zhanchun Zhang
>Assignee: Ran Tao
>Priority: Major
>  Labels: auto-unassigned, stale-assigned
>
> BIT_LSHIFT shifts a long number to the left.
> BIT_RSHIFT shifts a long number to the right.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32522) Kafka connector should depend on commons-collections instead of inherit from flink

2023-07-04 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32522:

Summary: Kafka connector should depend on commons-collections instead of 
inherit from flink  (was: Flink sql connector kafka should include 
commons-collections in shade jar)

> Kafka connector should depend on commons-collections instead of inherit from 
> flink
> --
>
> Key: FLINK-32522
> URL: https://issues.apache.org/jira/browse/FLINK-32522
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-07-03-20-15-47-608.png, 
> image-2023-07-03-20-16-03-031.png
>
>
> Currently, the externalized SQL connectors rely on the Flink main repo, but the 
> Flink main repo has many test cases (especially in flink-python) that reference 
> flink-sql-kafka-connector.
> If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
> it causes an exception:
> !image-2023-07-03-20-15-47-608.png!
>  
> !https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!
>  
> So we must add this dependency explicitly. Otherwise, external connectors will 
> block the upgrade of the Flink main repo. Connectors shouldn't rely on 
> dependencies that may or may not be available in Flink itself.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32522) Flink sql connector kafka should include commons-collections in shade jar

2023-07-04 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32522:

Description: 
Currently, the externalized SQL connectors rely on the Flink main repo, but the 
Flink main repo has many test cases (especially in flink-python) that reference 
flink-sql-kafka-connector.

If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
it causes an exception:
!image-2023-07-03-20-15-47-608.png!

 

!https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!

 

So we must add this dependency explicitly. Otherwise, external connectors will 
block the upgrade of the Flink main repo. Connectors shouldn't rely on 
dependencies that may or may not be available in Flink itself.

  was:
Currently, the externalized SQL connectors rely on the Flink main repo, but the 
Flink main repo has many test cases (especially in flink-python) that reference 
flink-sql-kafka-connector.

If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
it causes an exception:
!image-2023-07-03-20-15-47-608.png!

 

!https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!

 

So we must add this dependency to the shaded jar. Otherwise, external components 
will block the upgrade of the Flink main repo.


> Flink sql connector kafka should include commons-collections in shade jar
> -
>
> Key: FLINK-32522
> URL: https://issues.apache.org/jira/browse/FLINK-32522
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-07-03-20-15-47-608.png, 
> image-2023-07-03-20-16-03-031.png
>
>
> Currently, the externalized SQL connectors rely on the Flink main repo, but the 
> Flink main repo has many test cases (especially in flink-python) that reference 
> flink-sql-kafka-connector.
> If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
> it causes an exception:
> !image-2023-07-03-20-15-47-608.png!
>  
> !https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!
>  
> So we must add this dependency explicitly. Otherwise, external connectors will 
> block the upgrade of the Flink main repo. Connectors shouldn't rely on 
> dependencies that may or may not be available in Flink itself.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32522) Flink sql connector kafka should include commons-collections in shade jar

2023-07-04 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32522:

Description: 
Currently, the externalized SQL connectors rely on the Flink main repo, but the 
Flink main repo has many test cases (especially in flink-python) that reference 
flink-sql-kafka-connector.

If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
it causes an exception:
!image-2023-07-03-20-15-47-608.png!

 

!https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!

 

So we must add this dependency to the shaded jar. Otherwise, external components 
will block the upgrade of the Flink main repo.

  was:
Currently, the externalized SQL connectors rely on the Flink main repo, but the 
Flink main repo has many test cases (especially in flink-python) that reference 
flink-sql-kafka-connector.

If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
it causes an exception:
!image-2023-07-03-20-15-47-608.png!

 

!https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!

 

We should let the externalized Flink connectors depend on the Flink main repo, 
not the other way around. So we must add this dependency to the shaded jar.

Otherwise, external components will block the upgrade of the Flink main repo.


> Flink sql connector kafka should include commons-collections in shade jar
> -
>
> Key: FLINK-32522
> URL: https://issues.apache.org/jira/browse/FLINK-32522
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-07-03-20-15-47-608.png, 
> image-2023-07-03-20-16-03-031.png
>
>
> Currently, the externalized SQL connectors rely on the Flink main repo, but the 
> Flink main repo has many test cases (especially in flink-python) that reference 
> flink-sql-kafka-connector.
> If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
> it causes an exception:
> !image-2023-07-03-20-15-47-608.png!
>  
> !https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!
>  
> So we must add this dependency to the shaded jar. Otherwise, external components 
> will block the upgrade of the Flink main repo.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32522) Flink sql connector kafka should include commons-collections in shade jar

2023-07-04 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17739871#comment-17739871
 ] 

Ran Tao commented on FLINK-32522:
-

As discussed in this thread: 
[https://lists.apache.org/thread/l98pc18onxrcrsb01x5kh1vppl7ymk2d], connectors 
shouldn't rely on dependencies that may or may not be available in Flink itself.

But currently the Kafka connector uses commons-collections from Flink, so we 
should depend on commons-collections directly and bundle it in the shaded jar.
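To make the failure concrete: the connector code references commons-collections 
types such as org.apache.commons.collections.map.LinkedMap (used by 
FlinkKafkaConsumerBase), so without the dependency on the classpath, class 
loading fails. A minimal probe (a sketch, not connector code):
{code:java}
import org.apache.commons.collections.map.LinkedMap;

public class LinkedMapProbe {
    public static void main(String[] args) {
        // If commons-collections is neither bundled in the shaded jar nor
        // provided by flink-dist, this line fails at class-loading time with
        // a NoClassDefFoundError, as in the attached screenshots.
        LinkedMap pendingOffsetsToCommit = new LinkedMap();
        pendingOffsetsToCommit.put(1L, "checkpoint-1");
        System.out.println(pendingOffsetsToCommit);
    }
}
{code}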

> Flink sql connector kafka should include commons-collections in shade jar
> -
>
> Key: FLINK-32522
> URL: https://issues.apache.org/jira/browse/FLINK-32522
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-07-03-20-15-47-608.png, 
> image-2023-07-03-20-16-03-031.png
>
>
> Currently, the externalized SQL connectors rely on the Flink main repo, but the 
> Flink main repo has many test cases (especially in flink-python) that reference 
> flink-sql-kafka-connector.
> If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
> it causes an exception:
> !image-2023-07-03-20-15-47-608.png!
>  
> !https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!
>  
> We should let the externalized Flink connectors depend on the Flink main repo, 
> not the other way around. So we must add this dependency to the shaded jar.
> Otherwise, external components will block the upgrade of the Flink main repo.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32522) Flink sql connector kafka should include commons-collections in shade jar

2023-07-03 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32522:

Description: 
Currently, the externalized SQL connectors rely on the Flink main repo, but the 
Flink main repo has many test cases (especially in flink-python) that reference 
flink-sql-kafka-connector.

If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
it causes an exception:
!image-2023-07-03-20-15-47-608.png!

 

!https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!

 

We should let the externalized Flink connectors depend on the Flink main repo, 
not the other way around. So we must add this dependency to the shaded jar.

Otherwise, external components will block the upgrade of the Flink main repo.

  was:
Currently, the externalized SQL connectors rely on the Flink main repo, but the 
Flink main repo has many test cases (especially in flink-python) that reference 
flink-sql-kafka-connector.

If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
it causes an exception:
!image-2023-07-03-20-15-47-608.png!

 

!https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!

 

We should let the externalized Flink connectors depend on the Flink main repo, 
not the other way around. So we must add this dependency to the shaded jar.

Otherwise, external components will block the upgrade of the Flink main repo.


> Flink sql connector kafka should include commons-collections in shade jar
> -
>
> Key: FLINK-32522
> URL: https://issues.apache.org/jira/browse/FLINK-32522
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
> Attachments: image-2023-07-03-20-15-47-608.png, 
> image-2023-07-03-20-16-03-031.png
>
>
> Currently, the externalized SQL connectors rely on the Flink main repo, but the 
> Flink main repo has many test cases (especially in flink-python) that reference 
> flink-sql-kafka-connector.
> If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
> it causes an exception:
> !image-2023-07-03-20-15-47-608.png!
>  
> !https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!
>  
> We should let the externalized Flink connectors depend on the Flink main repo, 
> not the other way around. So we must add this dependency to the shaded jar.
> Otherwise, external components will block the upgrade of the Flink main repo.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32522) Flink sql connector kafka should include commons-collections in shade jar

2023-07-03 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32522:

Description: 
Currently, the externalized SQL connectors rely on the Flink main repo, but the 
Flink main repo has many test cases (especially in flink-python) that reference 
flink-sql-kafka-connector.

If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
it causes an exception:
!image-2023-07-03-20-15-47-608.png!

 

!https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!

 

We should let the externalized Flink connectors depend on the Flink main repo, 
not the other way around. So we must add this dependency to the shaded jar.

Otherwise, external components will block the upgrade of the Flink main repo.

  was:
Currently, the externalized SQL connectors rely on the Flink main repo, but the 
Flink main repo has many test cases (especially in flink-python) that reference 
flink-sql-kafka-connector.

If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
it causes an exception:
!image-2023-07-03-20-15-47-608.png!

 

We should let the externalized Flink connectors depend on the Flink main repo, 
not the other way around. So we must add this dependency to the shaded jar.


> Flink sql connector kafka should include commons-collections in shade jar
> -
>
> Key: FLINK-32522
> URL: https://issues.apache.org/jira/browse/FLINK-32522
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
> Attachments: image-2023-07-03-20-15-47-608.png, 
> image-2023-07-03-20-16-03-031.png
>
>
> Currently, the externalized SQL connectors rely on the Flink main repo, but the 
> Flink main repo has many test cases (especially in flink-python) that reference 
> flink-sql-kafka-connector.
> If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
> it causes an exception:
> !image-2023-07-03-20-15-47-608.png!
>  
> !https://user-images.githubusercontent.com/11287509/250120522-6b096a4f-83f0-4287-b7ad-d46b9371de4c.png!
>  
> We should let the externalized Flink connectors depend on the Flink main repo, 
> not the other way around. So we must add this dependency to the shaded jar.
> Otherwise, external components will block the upgrade of the Flink main repo.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32522) Flink sql connector kafka should include commons-collections in shade jar

2023-07-03 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32522:

Summary: Flink sql connector kafka should include commons-collections in 
shade jar  (was: Flink sql connector kafka should include commons-collections 
in shade phase)

> Flink sql connector kafka should include commons-collections in shade jar
> -
>
> Key: FLINK-32522
> URL: https://issues.apache.org/jira/browse/FLINK-32522
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
> Attachments: image-2023-07-03-20-15-47-608.png, 
> image-2023-07-03-20-16-03-031.png
>
>
> Currently, the externalized SQL connectors rely on the Flink main repo, but the 
> Flink main repo has many test cases (especially in flink-python) that reference 
> flink-sql-kafka-connector.
> If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
> it causes an exception:
> !image-2023-07-03-20-15-47-608.png!
>  
> We should let the externalized Flink connectors depend on the Flink main repo, 
> not the other way around. So we must add this dependency to the shaded jar.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32522) Flink sql connector kafka should include commons-collections in shade phase

2023-07-03 Thread Ran Tao (Jira)
Ran Tao created FLINK-32522:
---

 Summary: Flink sql connector kafka should include 
commons-collections in shade phase
 Key: FLINK-32522
 URL: https://issues.apache.org/jira/browse/FLINK-32522
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Affects Versions: 1.17.1
Reporter: Ran Tao
 Attachments: image-2023-07-03-20-15-47-608.png, 
image-2023-07-03-20-16-03-031.png

Currently, the externalized SQL connectors rely on the Flink main repo, but the 
Flink main repo has many test cases (especially in flink-python) that reference 
flink-sql-kafka-connector.

If we change the dependencies (e.g. commons-collections) in the Flink main repo, 
it causes an exception:
!image-2023-07-03-20-15-47-608.png!

 

We should let the externalized Flink connectors depend on the Flink main repo, 
not the other way around. So we must add this dependency to the shaded jar.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32485) Flink State Backend Changelog should support build test-jar

2023-06-29 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738909#comment-17738909
 ] 

Ran Tao commented on FLINK-32485:
-

[~ym]  I did a quick check; it seems no other module references it. The failure 
may be caused by some dependencies when IDEA runs the tests. However, IDEA can 
run them successfully after I add the test-jar.

> Flink State Backend Changelog should support build test-jar
> ---
>
> Key: FLINK-32485
> URL: https://issues.apache.org/jira/browse/FLINK-32485
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> In some scenarios, executing unit tests reports the following error. In fact, 
> since the flink-statebackend-changelog tests contain some util classes, we 
> should build a test-jar like the flink-statebackend-rocksdb module does.
> {code:java}
> /Users/xxx/github/flink/flink-state-backends/flink-statebackend-changelog/src/test/java/org/apache/flink/state/changelog/ChangelogStateBackendTestUtils.java:29:37
> java: Package org.apache.flink.changelog.fs not exist {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32478) SourceCoordinatorAlignmentTest.testAnnounceCombinedWatermarkWithoutStart fails

2023-06-29 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738899#comment-17738899
 ] 

Ran Tao commented on FLINK-32478:
-

[~fanrui] Hi, CI hit this problem again:

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50662&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=8617

> SourceCoordinatorAlignmentTest.testAnnounceCombinedWatermarkWithoutStart fails
> --
>
> Key: FLINK-32478
> URL: https://issues.apache.org/jira/browse/FLINK-32478
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.18.0, 1.16.3, 1.17.2
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.18.0, 1.16.3, 1.17.2
>
>
> SourceCoordinatorAlignmentTest.testAnnounceCombinedWatermarkWithoutStart fails
>  
> Root cause: multiple sources share the same thread pool, and the second 
> source cannot start because the first source closes the shared thread pool.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50611&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=8613
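For illustration, the shared-pool failure mode described above can be reproduced 
in isolation with a plain JDK executor (a generic sketch, not SourceCoordinator 
code):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class SharedPoolHazard {
    public static void main(String[] args) {
        ExecutorService shared = Executors.newSingleThreadExecutor();
        shared.submit(() -> System.out.println("source 1 announces watermark"));
        // The first "source" closes the pool it shares with the second one...
        shared.shutdown();
        try {
            // ...so the second "source" can no longer schedule its work.
            shared.submit(() -> System.out.println("source 2 announces watermark"));
        } catch (RejectedExecutionException e) {
            System.out.println("second source rejected: " + e);
        }
    }
}
{code}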



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32485) Flink State Backend Changelog should support build test-jar

2023-06-29 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738593#comment-17738593
 ] 

Ran Tao commented on FLINK-32485:
-

Thanks, Martijn. Hi [~ym], can you help review it?

> Flink State Backend Changelog should support build test-jar
> ---
>
> Key: FLINK-32485
> URL: https://issues.apache.org/jira/browse/FLINK-32485
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> In some scenarios, executing unit tests reports the following error. In fact, 
> since the flink-statebackend-changelog tests contain some util classes, we 
> should build a test-jar like the flink-statebackend-rocksdb module does.
> {code:java}
> /Users/xxx/github/flink/flink-state-backends/flink-statebackend-changelog/src/test/java/org/apache/flink/state/changelog/ChangelogStateBackendTestUtils.java:29:37
> java: Package org.apache.flink.changelog.fs not exist {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32485) Flink State Backend Changelog should support build test-jar

2023-06-29 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738572#comment-17738572
 ] 

Ran Tao commented on FLINK-32485:
-

cc [~martijnvisser] WDYT?

> Flink State Backend Changelog should support build test-jar
> ---
>
> Key: FLINK-32485
> URL: https://issues.apache.org/jira/browse/FLINK-32485
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> In some scenarios, executing unit tests reports the following error. In fact, 
> since the flink-statebackend-changelog tests contain some util classes, we 
> should build a test-jar like the flink-statebackend-rocksdb module does.
> {code:java}
> /Users/xxx/github/flink/flink-state-backends/flink-statebackend-changelog/src/test/java/org/apache/flink/state/changelog/ChangelogStateBackendTestUtils.java:29:37
> java: Package org.apache.flink.changelog.fs not exist {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32485) Flink State Backend Changelog should support build test-jar

2023-06-29 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32485:

Description: 
In some scenarios, executing unit tests reports the following error. In fact, 
since the flink-statebackend-changelog tests contain some util classes, we 
should build a test-jar like the flink-statebackend-rocksdb module does.
{code:java}
/Users/xxx/github/flink/flink-state-backends/flink-statebackend-changelog/src/test/java/org/apache/flink/state/changelog/ChangelogStateBackendTestUtils.java:29:37

java: Package org.apache.flink.changelog.fs not exist {code}

  was:
In some scenarios, executing unit tests reports the following error. In fact, 
since the flink-statebackend-changelog tests contain some util classes, we 
should build a test-jar.


{code:java}
/Users/xxx/github/flink/flink-state-backends/flink-statebackend-changelog/src/test/java/org/apache/flink/state/changelog/ChangelogStateBackendTestUtils.java:29:37

java: Package org.apache.flink.changelog.fs not exist {code}


> Flink State Backend Changelog should support build test-jar
> ---
>
> Key: FLINK-32485
> URL: https://issues.apache.org/jira/browse/FLINK-32485
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
>
> In some scenarios, executing unit tests reports the following error. In fact, 
> since the flink-statebackend-changelog tests contain some util classes, we 
> should build a test-jar like the flink-statebackend-rocksdb module does.
> {code:java}
> /Users/xxx/github/flink/flink-state-backends/flink-statebackend-changelog/src/test/java/org/apache/flink/state/changelog/ChangelogStateBackendTestUtils.java:29:37
> java: Package org.apache.flink.changelog.fs not exist {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32485) Flink State Backend Changelog should support build test-jar

2023-06-29 Thread Ran Tao (Jira)
Ran Tao created FLINK-32485:
---

 Summary: Flink State Backend Changelog should support build 
test-jar
 Key: FLINK-32485
 URL: https://issues.apache.org/jira/browse/FLINK-32485
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / State Backends
Affects Versions: 1.17.1
Reporter: Ran Tao


In some scenarios, executing unit tests reports the following error. In fact, 
since the flink-statebackend-changelog tests contain some util classes, we 
should build a test-jar.


{code:java}
/Users/xxx/github/flink/flink-state-backends/flink-statebackend-changelog/src/test/java/org/apache/flink/state/changelog/ChangelogStateBackendTestUtils.java:29:37

java: Package org.apache.flink.changelog.fs not exist {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32310) Support enhanced show functions syntax

2023-06-29 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738523#comment-17738523
 ] 

Ran Tao commented on FLINK-32310:
-

[~luoyuxia] Hi, yuxia. Sorry for the late reply. I have submitted this PR; the 
general logic is consistent with your SHOW PROCEDURES work, with one small 
difference: considering the different forms of LIKE, I have introduced an 
enumeration for LIKE to stay compatible with other possible LIKE variants in the 
future. If you have time, please help me review it. Thanks, and sorry again.

> Support enhanced show functions syntax
> --
>
> Key: FLINK-32310
> URL: https://issues.apache.org/jira/browse/FLINK-32310
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> As discussed in the FLIP, we will support new syntax for showing functions.
> The syntax:
> SHOW [USER] FUNCTIONS [ ( FROM | IN ) [catalog_name.]database_name ] [ [NOT] (LIKE | ILIKE) <sql_like_pattern> ]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32310) Support enhanced show functions syntax

2023-06-12 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32310:

Parent: FLINK-31256
Issue Type: Sub-task  (was: Improvement)

> Support enhanced show functions syntax
> --
>
> Key: FLINK-32310
> URL: https://issues.apache.org/jira/browse/FLINK-32310
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>
> As discussed in the FLIP, we will support new syntax for showing functions.
> The syntax:
> SHOW [USER] FUNCTIONS [ ( FROM | IN ) [catalog_name.]database_name ] [ [NOT] (LIKE | ILIKE) <sql_like_pattern> ]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32310) Support enhanced show functions syntax

2023-06-12 Thread Ran Tao (Jira)
Ran Tao created FLINK-32310:
---

 Summary: Support enhanced show functions syntax
 Key: FLINK-32310
 URL: https://issues.apache.org/jira/browse/FLINK-32310
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API, Table SQL / Planner
Reporter: Ran Tao


As discussed in the FLIP, we will support new syntax for showing functions.

The syntax:
SHOW [USER] FUNCTIONS [ ( FROM | IN ) [catalog_name.]database_name ] [ [NOT] (LIKE | ILIKE) <sql_like_pattern> ]
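A usage sketch via the Table API (hypothetical queries following the proposed 
grammar; db1 and catalog1 are placeholder names, and the statements only parse 
on a Flink version that ships this FLIP):
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ShowFunctionsExample {
    public static void main(String[] args) {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // List the functions of a specific database, filtered by a LIKE pattern.
        env.executeSql("SHOW FUNCTIONS FROM db1 LIKE 'agg%'").print();
        // List only user-defined functions, excluding an ILIKE pattern.
        env.executeSql("SHOW USER FUNCTIONS IN catalog1.db1 NOT ILIKE 'tmp%'").print();
    }
}
{code}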



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31256) [Umbrella] FLIP-297: Improve Auxiliary Sql Statements

2023-06-12 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17731555#comment-17731555
 ] 

Ran Tao commented on FLINK-31256:
-

[~luoyuxia]  Yes, I will open a ticket to solve this.
The current progress of this FLIP is relatively slow, mainly because I can't 
find someone to help review it in time.
I'm grateful that you can help review it. Thanks.

> [Umbrella] FLIP-297: Improve Auxiliary Sql Statements
> -
>
> Key: FLINK-31256
> URL: https://issues.apache.org/jira/browse/FLINK-31256
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Major
>
> The FLIP design doc can be found at page 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-297%3A+Improve+Auxiliary+Sql+Statements.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32044) Improve catalog name check to keep consistent about human-readable exception log in FunctionCatalog

2023-05-10 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32044:

Description: 
{code:java}
Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
 

We can make a trivial improvement to guard Optional#get and throw a friendlier 
exception message to users, like the other list operations do.

  was:
{code:java}
Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
 

We can make a trivial improvement to guard Optional#get and throw a friendlier 
exception message to users.


> Improve catalog name check to keep consistent about human-readable exception 
> log in FunctionCatalog 
> 
>
> Key: FLINK-32044
> URL: https://issues.apache.org/jira/browse/FLINK-32044
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.17.0
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
>  
> We can make a trivial improvement to guard Optional#get and throw a friendlier 
> exception message to users, like the other list operations do.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32044) Improve catalog name check to keep consistent about human-readable exception log in FunctionCatalog

2023-05-10 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32044:

Description: 
{code:java}
Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
 

We can make an improvement to guard Optional#get and throw a friendlier 
exception message to users, like the other list operations do.

  was:
{code:java}
Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
 

We can make a trivial improvement to guard Optional#get and throw a friendlier 
exception message to users, like the other list operations do.


> Improve catalog name check to keep consistent about human-readable exception 
> log in FunctionCatalog 
> 
>
> Key: FLINK-32044
> URL: https://issues.apache.org/jira/browse/FLINK-32044
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.17.0
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
>  
> We can make an improvement to guard Optional#get and throw a friendlier 
> exception message to users, like the other list operations do.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32044) Improve catalog name check to keep consistent about human-readable exception log in FunctionCatalog

2023-05-10 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32044:

Description: 
{code:java}
Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
 

We can make a trivial improvement to guard Optional#get and throw a friendlier 
exception message to users.

  was:
{code:java}
Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
 
A trivial improvement: guard Optional#get and throw a friendlier exception 
message to users.


> Improve catalog name check to keep consistent about human-readable exception 
> log in FunctionCatalog 
> 
>
> Key: FLINK-32044
> URL: https://issues.apache.org/jira/browse/FLINK-32044
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.17.0
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
>  
> We can make a trivial improvement to guard Optional#get and throw a friendlier 
> exception message to users.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32044) Improve catalog name check to keep consistent about human-readable exception log in FunctionCatalog

2023-05-10 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32044:

Description: 
{code:java}
Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
 
A trivial improvement: guard Optional#get and throw a friendlier exception 
message to users.

  was:
{code:java}
Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
 
-->
 
{code:java}
Catalog catalog =
        catalogManager
                .getCatalog(catalogName)
                .orElseThrow(
                        () ->
                                new ValidationException(
                                        String.format(
                                                "Catalog %s not exists.",
                                                catalogName)));{code}
A trivial improvement: guard Optional#get and throw a friendlier exception 
message to users.


> Improve catalog name check to keep consistent about human-readable exception 
> log in FunctionCatalog 
> 
>
> Key: FLINK-32044
> URL: https://issues.apache.org/jira/browse/FLINK-32044
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.17.0
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
>  
> A trivial improvement: guard Optional#get and throw a friendlier exception 
> message to users.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32044) Improve catalog name check to keep consistent about human-readable exception log in FunctionCatalog

2023-05-10 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-32044:

Summary: Improve catalog name check to keep consistent about human-readable 
exception log in FunctionCatalog   (was: Impove catalog name check to keep 
consistent about human-readable exception log in FunctionCatalog )

> Improve catalog name check to keep consistent about human-readable exception 
> log in FunctionCatalog 
> 
>
> Key: FLINK-32044
> URL: https://issues.apache.org/jira/browse/FLINK-32044
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.17.0
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
>  
> -->
>  
> {code:java}
> Catalog catalog =
>         catalogManager
>                 .getCatalog(catalogName)
>                 .orElseThrow(
>                         () ->
>                                 new ValidationException(
>                                         String.format(
>                                                 "Catalog %s not exists.",
>                                                 catalogName)));{code}
> A trivial improvement: guard Optional#get and throw a friendlier exception 
> message to users.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32044) Impove catalog name check to keep consistent about human-readable exception log in FunctionCatalog

2023-05-10 Thread Ran Tao (Jira)
Ran Tao created FLINK-32044:
---

 Summary: Impove catalog name check to keep consistent about 
human-readable exception log in FunctionCatalog 
 Key: FLINK-32044
 URL: https://issues.apache.org/jira/browse/FLINK-32044
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.17.0
Reporter: Ran Tao


{code:java}
Catalog catalog = catalogManager.getCatalog(catalogName).get(); {code}
 
-->
 
{code:java}
Catalog catalog =
        catalogManager
                .getCatalog(catalogName)
                .orElseThrow(
                        () ->
                                new ValidationException(
                                        String.format(
                                                "Catalog %s not exists.",
                                                catalogName)));{code}
A trivial improvement: guard Optional#get and throw a friendlier exception 
message to users.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31824) flink sql TO_TIMESTAMP error

2023-04-18 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713525#comment-17713525
 ] 

Ran Tao commented on FLINK-31824:
-

[~leishuiyu] 

1. A date pattern containing a literal T needs the T quoted (escaped as ''T'' 
inside a SQL string literal), such as "yyyy-MM-dd'T'HH:mm:ss.SSS", while Z does 
not need this.

Why do you try to match a 'Z' placeholder when the input has no time-zone content?

 

I think the common usage could be like:
{code:java}
Flink SQL> select TO_TIMESTAMP('2022-04-18T12:34:56.000+0800', 
'yyyy-MM-dd''T''HH:mm:ss.SSSZ'); {code}
 

> flink sql TO_TIMESTAMP error
> 
>
> Key: FLINK-31824
> URL: https://issues.apache.org/jira/browse/FLINK-31824
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.14.3
> Environment: the version is 1.14.3
>Reporter: leishuiyu
>Priority: Major
> Attachments: image-2023-04-17-20-07-17-569.png
>
>
>  
>  
>  
>  
>  
>  
> !image-2023-04-17-20-07-17-569.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31760) COALESCE() with NULL arguments throws error

2023-04-10 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17710418#comment-17710418
 ] 

Ran Tao commented on FLINK-31760:
-

Thanks, lee, for the explanations. CAST works well.
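For reference, the CAST workaround looks like this (a minimal sketch; the typed 
NULLs pass validation, and COALESCE then returns NULL as expected):
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CoalesceWorkaround {
    public static void main(String[] args) {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // COALESCE(NULL, NULL) fails validation, but typed NULLs are legal.
        env.executeSql("SELECT COALESCE(CAST(NULL AS STRING), CAST(NULL AS STRING))")
                .print();
    }
}
{code}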

> COALESCE() with NULL arguments throws error
> ---
>
> Key: FLINK-31760
> URL: https://issues.apache.org/jira/browse/FLINK-31760
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
> Environment: Flink 1.16.1
>Reporter: Mohsen Rezaei
>Priority: Major
> Fix For: 1.6.4, 1.18.0, 1.17.1
>
>
> All arguments may not be nullable:
> {code}
> SELECT COALESCE(NULL, NULL)  FROM UnnamedTable$0
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 17 to line 1, column 20: Illegal 
> use of 'NULL'
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:186)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:113)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:261)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:703)
>   at CoalesceTest.main(CoalesceTest.java:58)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 17 to line 1, column 20: Illegal use of 'NULL'
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:883)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:868)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:4867)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.inferUnknownTypes(SqlValidatorImpl.java:1837)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.inferUnknownTypes(SqlValidatorImpl.java:1912)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:419)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelectList(SqlValidatorImpl.java:4061)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3347)
>   at 
> org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
>   at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
>   at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:232)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:952)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:704)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:182)
>   ... 5 more
> Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Illegal use 
> of 'NULL'
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
>   at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:560)
>   ... 21 more
> {code}
> As 
> [documented|https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/functions/systemfunctions/#conditional-functions],
>  supports all nullable arguments.

[jira] [Commented] (FLINK-31760) COALESCE() with NULL arguments throws error

2023-04-10 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17710379#comment-17710379
 ] 

Ran Tao commented on FLINK-31760:
-

[~morezaei00]  Hi Mohsen, I think it's a bug: COALESCE needs to return NULL if 
all args are NULL. Spark SQL is a good example, and we can see that MySQL, SQL 
Server, and other mature engines support returning NULL.
Would you like to fix it? Otherwise, I will try to fix it.

> COALESCE() with NULL arguments throws error
> ---
>
> Key: FLINK-31760
> URL: https://issues.apache.org/jira/browse/FLINK-31760
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
> Environment: Flink 1.16.1
>Reporter: Mohsen Rezaei
>Priority: Major
> Fix For: 1.6.4, 1.18.0, 1.17.1
>
>
> All arguments may not be nullable:
> {code}
> SELECT COALESCE(NULL, NULL)  FROM UnnamedTable$0
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 17 to line 1, column 20: Illegal 
> use of 'NULL'
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:186)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:113)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:261)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:703)
>   at CoalesceTest.main(CoalesceTest.java:58)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 17 to line 1, column 20: Illegal use of 'NULL'
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:883)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:868)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:4867)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.inferUnknownTypes(SqlValidatorImpl.java:1837)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.inferUnknownTypes(SqlValidatorImpl.java:1912)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:419)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelectList(SqlValidatorImpl.java:4061)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3347)
>   at 
> org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
>   at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
>   at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:232)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:952)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:704)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:182)
>   ... 5 more
> Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Illegal use 
> of 'NULL'
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
>   at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:560)

[jira] [Commented] (FLINK-31742) Replace deprecated TableSchema in flink-table-planner test

2023-04-05 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17709210#comment-17709210
 ] 

Ran Tao commented on FLINK-31742:
-

[~icshuo] Yes, I changed the issue name. We can replace these old usages in the 
next release or a 2.x release. WDYT?

> Replace deprecated TableSchema in flink-table-planner test
> --
>
> Key: FLINK-31742
> URL: https://issues.apache.org/jira/browse/FLINK-31742
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Planner
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
>
> We can try to remove the deprecated TableSchema and use Schema & ResolvedSchema 
> to replace it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31742) Replace deprecated TableSchema in flink-table-planner test

2023-04-05 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31742:

Description: We can try to remove the deprecated TableSchema and use Schema & 
ResolvedSchema to replace it.  (was: Try to remove the deprecated TableSchema 
and use Schema & ResolvedSchema to replace it.)

> Replace deprecated TableSchema in flink-table-planner test
> --
>
> Key: FLINK-31742
> URL: https://issues.apache.org/jira/browse/FLINK-31742
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Planner
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
>
> We can try to remove the deprecated TableSchema and use Schema & ResolvedSchema 
> to replace it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31742) Replace deprecated TableSchema in flink-table-planner test

2023-04-05 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31742:

Summary: Replace deprecated TableSchema in flink-table-planner test  (was: 
Remove deprecated TableSchema in flink-table-planner test)

> Replace deprecated TableSchema in flink-table-planner test
> --
>
> Key: FLINK-31742
> URL: https://issues.apache.org/jira/browse/FLINK-31742
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Planner
>Affects Versions: 1.17.1
>Reporter: Ran Tao
>Priority: Major
>
> Try to remove the deprecated TableSchema and use Schema & ResolvedSchema to 
> replace it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31742) Remove deprecated TableSchema in flink-table-planner test

2023-04-05 Thread Ran Tao (Jira)
Ran Tao created FLINK-31742:
---

 Summary: Remove deprecated TableSchema in flink-table-planner test
 Key: FLINK-31742
 URL: https://issues.apache.org/jira/browse/FLINK-31742
 Project: Flink
  Issue Type: Technical Debt
  Components: Table SQL / Planner
Affects Versions: 1.17.1
Reporter: Ran Tao


Try to remove the deprecated TableSchema and use Schema & ResolvedSchema to 
replace it.
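A sketch of the replacement direction (the column names here are placeholders; 
Schema and DataTypes are the current flink-table API):
{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;

public class SchemaMigrationSketch {
    public static void main(String[] args) {
        // Instead of the deprecated TableSchema.builder(), build the
        // unresolved Schema; catalog resolution then yields a ResolvedSchema.
        Schema schema =
                Schema.newBuilder()
                        .column("id", DataTypes.BIGINT())
                        .column("name", DataTypes.STRING())
                        .build();
        System.out.println(schema);
    }
}
{code}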



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31710) Remove and rely on the curator dependency that's provided by flink

2023-04-03 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17708191#comment-17708191
 ] 

Ran Tao commented on FLINK-31710:
-

Got it, thanks for your explanations.

> Remove  and rely on the curator dependency that's provided 
> by flink
> -
>
> Key: FLINK-31710
> URL: https://issues.apache.org/jira/browse/FLINK-31710
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Matthias Pohl
>Assignee: Ran Tao
>Priority: Major
>
> Currently, we're relying on a dedicated curator dependency in tests to use 
> the {{TestingZooKeeperServer}} (see [Flink's parent 
> pom|https://github.com/apache/flink/blob/97cff0768d05e4a7d0217ddc92fd9ea3c7fae2c2/pom.xml#L143]).
>  Besides that, we're using {{flink-shaded}} to provide the zookeeper and 
> curator dependency that is used in Flink's production code.
> The flaw of that approach is that we have to maintain two curator versions. 
> This Jira issue is about investigating whether we could just remove the 
> curator test dependency and rely on the {{flink-shaded}} curator sources.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31710) Remove and rely on the curator dependency that's provided by flink

2023-04-03 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707969#comment-17707969
 ] 

Ran Tao edited comment on FLINK-31710 at 4/3/23 1:36 PM:
-

[~mapohl] Hi, Matthias. In fact, we have encountered similar duplicated 
dependencies in related projects of the Flink ecosystem in our internal Flink 
version, such as guava vs. flink-shaded-guava and hadoop vs. 
flink-shaded-hadoop. It is good practice to enforce using the shaded 
dependencies. I'd be happy to work on this issue. Can you assign this ticket to 
me? (If we decide to use shaded-curator)


was (Author: lemonjing):
[~mapohl] Hi, Matthias. In fact, we have encountered similar duplicated 
dependencies in related projects of the Flink ecosystem in our internal Flink 
version, such as guava vs. flink-shaded-guava and hadoop vs. 
flink-shaded-hadoop. It is good practice to enforce using the shaded 
scheme. I'd be happy to work on this issue. Can you assign this ticket to me? 
(If we decide to use shaded-curator)

> Remove  and rely on the curator dependency that's provided 
> by flink
> -
>
> Key: FLINK-31710
> URL: https://issues.apache.org/jira/browse/FLINK-31710
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Matthias Pohl
>Priority: Major
>
> Currently, we're relying on a dedicated curator dependency in tests to use 
> the {{TestingZooKeeperServer}} (see [Flink's parent 
> pom|https://github.com/apache/flink/blob/97cff0768d05e4a7d0217ddc92fd9ea3c7fae2c2/pom.xml#L143]).
>  Besides that, we're using {{flink-shaded}} to provide the zookeeper and 
> curator dependency that is used in Flink's production code.
> The flaw of that approach is that we have to maintain two curator versions. 
> This Jira issue is about investigating whether we could just remove the 
> curator test dependency and rely on the {{flink-shaded}} curator sources.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31710) Remove and rely on the curator dependency that's provided by flink

2023-04-03 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707969#comment-17707969
 ] 

Ran Tao edited comment on FLINK-31710 at 4/3/23 1:35 PM:
-

[~mapohl] Hi, Matthias. In fact, we have encountered similar duplicated 
dependencies in related projects of the Flink ecosystem in our internal Flink 
version, such as guava vs. flink-shaded-guava and hadoop vs. 
flink-shaded-hadoop. It is good practice to enforce using the shaded 
scheme. I'd be happy to work on this issue. Can you assign this ticket to me? 
(If we decide to use shaded-curator)


was (Author: lemonjing):
[~mapohl] Hi, Matthias. In fact, we have encountered similar duplicated 
dependencies in related projects of the Flink ecosystem, such as guava vs. 
flink-shaded-guava and hadoop vs. flink-shaded-hadoop. It is good practice to 
enforce using the shaded scheme. I'd be happy to work on this issue. Can you 
assign this ticket to me? (If we decide to use shaded-curator)

> Remove  and rely on the curator dependency that's provided 
> by flink
> -
>
> Key: FLINK-31710
> URL: https://issues.apache.org/jira/browse/FLINK-31710
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Matthias Pohl
>Priority: Major
>
> Currently, we're relying on a dedicated curator dependency in tests to use 
> the {{TestingZooKeeperServer}} (see [Flink's parent 
> pom|https://github.com/apache/flink/blob/97cff0768d05e4a7d0217ddc92fd9ea3c7fae2c2/pom.xml#L143]).
>  Besides that, we're using {{flink-shaded}} to provide the zookeeper and 
> curator dependency that is used in Flink's production code.
> The flaw of that approach is that we have to maintain two curator versions. 
> This Jira issue is about investigating whether we could just remove the 
> curator test dependency and rely on the {{flink-shaded}} curator sources.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31710) Remove and rely on the curator dependency that's provided by flink

2023-04-03 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707969#comment-17707969
 ] 

Ran Tao edited comment on FLINK-31710 at 4/3/23 1:33 PM:
-

[~mapohl] Hi, Matthias. In fact, we have encountered other, similar duplicate 
dependencies in related projects of the Flink ecosystem, such as guava vs. 
flink-shaded-guava and hadoop vs. flink-shaded-hadoop. It is good practice to 
constrain usage to the shaded scheme. I'd be happy to work on this issue. Could 
you assign this ticket to me? (If we decide to use the shaded curator.)


was (Author: lemonjing):
In fact, we have encountered other, similar duplicate dependencies in related 
projects of the Flink ecosystem, such as guava vs. flink-shaded-guava and 
hadoop vs. flink-shaded-hadoop. It is good practice to constrain usage to the 
shaded scheme. I'd be happy to work on this issue. Could you assign this ticket 
to me? (If we decide to use the shaded curator.)

> Remove  and rely on the curator dependency that's provided 
> by flink
> -
>
> Key: FLINK-31710
> URL: https://issues.apache.org/jira/browse/FLINK-31710
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Matthias Pohl
>Priority: Major
>
> Currently, we're relying on a dedicated curator dependency in tests to use 
> the {{TestingZooKeeperServer}} (see [Flink's parent 
> pom|https://github.com/apache/flink/blob/97cff0768d05e4a7d0217ddc92fd9ea3c7fae2c2/pom.xml#L143]).
>  Besides that, we're using {{flink-shaded}} to provide the zookeeper and 
> curator dependency that is used in Flink's production code.
> The flaw of that approach is that we have to maintain two curator versions. 
> This Jira issue is about investigating whether we could just remove the 
> curator test dependency and rely on the {{flink-shaded}} curator sources.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31710) Remove and rely on the curator dependency that's provided by flink

2023-04-03 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707969#comment-17707969
 ] 

Ran Tao commented on FLINK-31710:
-

In fact, we have encountered other, similar duplicate dependencies in related 
projects of the Flink ecosystem, such as guava vs. flink-shaded-guava and 
hadoop vs. flink-shaded-hadoop. It is good practice to constrain usage to the 
shaded scheme. I'd be happy to work on this issue. Could you assign this ticket 
to me? (If we decide to use the shaded curator.)

> Remove  and rely on the curator dependency that's provided 
> by flink
> -
>
> Key: FLINK-31710
> URL: https://issues.apache.org/jira/browse/FLINK-31710
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Matthias Pohl
>Priority: Major
>
> Currently, we're relying on a dedicated curator dependency in tests to use 
> the {{TestingZooKeeperServer}} (see [Flink's parent 
> pom|https://github.com/apache/flink/blob/97cff0768d05e4a7d0217ddc92fd9ea3c7fae2c2/pom.xml#L143]).
>  Besides that, we're using {{flink-shaded}} to provide the zookeeper and 
> curator dependency that is used in Flink's production code.
> The flaw of that approach is that we have to maintain two curator versions. 
> This Jira issue is about investigating whether we could just remove the 
> curator test dependency and rely on the {{flink-shaded}} curator sources.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2023-03-30 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Description: 
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets.

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}
e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.

*Note:* many calcite-related classes, such as the flink hive parser (mostly 
located in flink-connector-hive), use assert in many places. We will not fix 
these classes; 
we need to keep them consistent with the assert usage in calcite. We 
just fix flink classes.
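
For illustration, the RowDataPrintFunction snippet could be rewritten with 
Flink's org.apache.flink.util.Preconditions like this (a minimal sketch, not 
the final patch; the error message is made up):
{code:java}
import org.apache.flink.util.Preconditions;

@Override
public void invoke(RowData value, Context context) {
    // Unlike 'assert', this check always runs, regardless of the -ea option.
    Object data =
            Preconditions.checkNotNull(
                    converter.toExternal(value), "Converted value must not be null.");
    writer.write(data.toString());
}
{code}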

  was:
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets.

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}
e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.

Note: many calcite classes such as the flink hive parser use assert in many 
places. We will not fix these classes; we just fix flink classes.


> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
>  

[jira] [Updated] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2023-03-30 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Description: 
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets.

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}
e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.

Note: many calcite classes such as the flink hive parser use assert in many 
places. We will not fix these classes; we just fix flink classes.

  was:
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets (using an IDE, we can find the 
other places).

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}
e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.


> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Formats (JSON, 

[jira] [Updated] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2023-03-30 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Description: 
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets (using an IDE, we can find the 
other places).

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}
e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.

  was:
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets (using an IDE, we can find the 
other places).

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}
e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.

Note: many calcite classes such as the flink hive parser use assert in many 
places. We will not fix these classes; we just fix flink classes.


> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Componen

[jira] [Updated] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2023-03-30 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Description: 
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets (using an IDE, we can find the 
other places).

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}
e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.

Note: many calcite classes such as the flink hive parser use assert in many 
places. We will not fix these classes; we just fix flink classes.

  was:
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets (using an IDE, we can find the 
other places).

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}
e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.

Note: many calcite classes use assert. We will not fix these classes; we just 
fix flink classes.


> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/bro

[jira] [Commented] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2023-03-30 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17706805#comment-17706805
 ] 

Ran Tao commented on FLINK-30259:
-

Hi [~chesnay] and [~twalthr], since some modules such as 
flink-table/table-planner are currently making efforts like paying down 
technical debt and removing deprecated code in preparation for a possible new 
2.0 version, can we also do a cleanup of this assert usage? WDYT?

> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile), Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Ran Tao
>Priority: Major
>
> Some modules of the current Flink project use Java's 'assert' keyword to do 
> checking, which only takes effect when the -enableassertions (-ea) JVM option 
> is set (it is disabled by default, so such assert code may never run) and may 
> otherwise lead to unexpected behavior. In fact, Flink already has a mature 
> Preconditions utility; we can use it to replace the 'assert' keyword, which 
> is cleaner and more consistent with Flink.
> The following is an example of some snippets (using an IDE, we can find the 
> other places).
> RowDataPrintFunction
> {code:java}
> @Override
> public void invoke(RowData value, Context context) {
> Object data = converter.toExternal(value);
> assert data != null;
> writer.write(data.toString());
> }
> {code}
> e.g. if assertions are not enabled, data.toString() will cause an NPE.
> KubernetesUtils
> {code:java}
> public static KubernetesConfigMap checkConfigMaps(
> List<KubernetesConfigMap> configMaps, String expectedConfigMapName) {
> assert (configMaps.size() == 1);
> assert (configMaps.get(0).getName().equals(expectedConfigMapName));
> return configMaps.get(0);
> }
> {code}
> e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.
> RocksDBOperationUtils
> {code:java}
> if (memoryConfig.isUsingFixedMemoryPerSlot()) {
> assert memoryConfig.getFixedMemoryPerSlot() != null;
> logger.info("Getting fixed-size shared cache for RocksDB.");
> return memoryManager.getExternalSharedMemoryResource(
> FIXED_SLOT_MEMORY_RESOURCE_ID,
> allocator,
> // if assertions are not enabled, this will cause an NPE.
> memoryConfig.getFixedMemoryPerSlot().getBytes());
> } else {
> logger.info("Getting managed memory shared cache for 
> RocksDB.");
> return memoryManager.getSharedMemoryResourceForManagedMemory(
> MANAGED_MEMORY_RESOURCE_ID, allocator, 
> memoryFraction);
> }
> {code}
> e.g. if assertions are not enabled, 
> RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
> cause an NPE.
> Note: many calcite classes use assert. We will not fix these classes; we just 
> fix flink classes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2023-03-30 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Description: 
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets (using an IDE, we can find the 
other places).

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}
e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.

Note: many calcite classes use assert. We will not fix these classes; we just 
fix flink classes.

  was:
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets (using an IDE, we can find the 
other places).

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}
e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.

Note: many calcite classes use assert. We will not fix them; we just fix flink 
classes.


> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
> 

[jira] [Updated] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2023-03-30 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Description: 
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets (using an IDE, we can find the 
other places).

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}
e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.

Note: many calcite classes use assert. We will not fix them; we just fix flink 
classes.

  was:
Some modules of the current Flink project use Java's 'assert' keyword to do 
checking, which only takes effect when the -enableassertions (-ea) JVM option 
is set (it is disabled by default, so such assert code may never run) and may 
otherwise lead to unexpected behavior. In fact, Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword, which is 
cleaner and more consistent with Flink.

The following is an example of some snippets (using an IDE, we can find the 
other places).

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName)
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) will cause an NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}

e.g. if assertions are not enabled, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes() will 
cause an NPE.








> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Formats (

[jira] (FLINK-12449) [Bitwise Functions] Add BIT_AND, BIT_OR functions supported in Table API and SQL

2023-03-28 Thread Ran Tao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-12449 ]


Ran Tao deleted comment on FLINK-12449:
-

was (Author: lemonjing):
[~lzljs3620320] Hi Jingsong, can you assign this ticket to me? I'd be glad to 
support it.

> [Bitwise Functions] Add BIT_AND,  BIT_OR functions supported in Table API and 
> SQL
> -
>
> Key: FLINK-12449
> URL: https://issues.apache.org/jira/browse/FLINK-12449
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Zhanchun Zhang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
> Attachments: image-2023-02-26-11-55-47-406.png, 
> image-2023-02-26-11-56-21-005.png
>
>
> Bitwise AND,
> e.g. SELECT BIT_AND(29, 15) returns 13.
> Bitwise OR,
> e.g. SELECT BIT_OR(29, 15) returns 31.
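
For reference, the expected results follow directly from Java's bitwise 
operators (a quick plain-Java illustration of the examples above):
{code:java}
// 29 = 0b11101, 15 = 0b01111
int and = 29 & 15; // = 0b01101 = 13
int or  = 29 | 15; // = 0b11111 = 31
{code}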



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31281) PythonFunctionRunner doesn't extend AutoCloseable but implements close

2023-03-27 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao closed FLINK-31281.
---
Resolution: Not A Problem

> PythonFunctionRunner doesn't extend AutoCloseable but implements close
> --
>
> Key: FLINK-31281
> URL: https://issues.apache.org/jira/browse/FLINK-31281
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Ran Tao
>Priority: Major
>
> The PythonFunctionRunner provides a {{close}} method (see 
> [PythonFunctionRunner|https://github.com/apache/flink/blob/0612a997ddcc791ee54f500fbf1299ce04987679/flink-python/src/main/java/org/apache/flink/python/PythonFunctionRunner.java])
>  but doesn't implement {{AutoCloseable}}. However, {{AutoCloseable}} would 
> enable us to use Java's try-with-resources feature and more generic utility 
> classes.
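
For illustration, this is roughly what the change would enable (a sketch 
assuming the interface simply extends AutoCloseable; createRunner() is a 
hypothetical factory):
{code:java}
public interface PythonFunctionRunner extends AutoCloseable {
    // close() is inherited from AutoCloseable, so existing implementations
    // keep working, and callers gain try-with-resources support:
    //
    // try (PythonFunctionRunner runner = createRunner()) {
    //     // ... process elements ...
    // }
}
{code}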



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31281) PythonFunctionRunner doesn't extend AutoCloseable but implements close

2023-03-27 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705762#comment-17705762
 ] 

Ran Tao commented on FLINK-31281:
-

Got it, I will close it. [~mapohl]

> PythonFunctionRunner doesn't extend AutoCloseable but implements close
> --
>
> Key: FLINK-31281
> URL: https://issues.apache.org/jira/browse/FLINK-31281
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Ran Tao
>Priority: Major
>
> The PythonFunctionRunner provides a {{close}} method (see 
> [PythonFunctionRunner|https://github.com/apache/flink/blob/0612a997ddcc791ee54f500fbf1299ce04987679/flink-python/src/main/java/org/apache/flink/python/PythonFunctionRunner.java])
>  but doesn't implement {{AutoCloseable}}. However, {{AutoCloseable}} would 
> enable us to use Java's try-with-resources feature and more generic utility 
> classes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31618) Broken links in docs for Pulsar connector

2023-03-26 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31618:

Affects Version/s: 1.17.0

> Broken links in docs for Pulsar connector 
> --
>
> Key: FLINK-31618
> URL: https://issues.apache.org/jira/browse/FLINK-31618
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.17.0
>Reporter: Ran Tao
>Priority: Major
>
> [https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/pulsar/]
>  
> Some compression types have broken (404) links.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31618) Broken links in docs for Pulsar connector

2023-03-26 Thread Ran Tao (Jira)
Ran Tao created FLINK-31618:
---

 Summary: Broken links in docs for Pulsar connector 
 Key: FLINK-31618
 URL: https://issues.apache.org/jira/browse/FLINK-31618
 Project: Flink
  Issue Type: Improvement
Reporter: Ran Tao


[https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/pulsar/]

 

Some compression types have broken (404) links.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31618) Broken links in docs for Pulsar connector

2023-03-26 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31618:

Component/s: Documentation

> Broken links in docs for Pulsar connector 
> --
>
> Key: FLINK-31618
> URL: https://issues.apache.org/jira/browse/FLINK-31618
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.17.0
>Reporter: Ran Tao
>Priority: Major
>
> [https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/pulsar/]
>  
> Some compression types have broken (404) links.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31006) job is not finished when using pipeline mode to run bounded source like kafka/pulsar

2023-03-24 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17704869#comment-17704869
 ] 

Ran Tao commented on FLINK-31006:
-

[~tzulitai] Not the same problem; this issue is that noMoreSplit was not reset 
during a JM failover. [~jackylau] Because Kafka in the flink main repo is 
code-frozen, could you close this PR and create a new one in the 
flink-connector-kafka repo if you can reproduce it? WDYT?

> job is not finished when using pipeline mode to run bounded source like 
> kafka/pulsar
> 
>
> Key: FLINK-31006
> URL: https://issues.apache.org/jira/browse/FLINK-31006
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
> Attachments: image-2023-02-10-13-20-52-890.png, 
> image-2023-02-10-13-23-38-430.png, image-2023-02-10-13-24-46-929.png, 
> image-2023-03-04-01-04-18-658.png, image-2023-03-04-01-05-25-335.png, 
> image-2023-03-04-01-07-04-927.png, image-2023-03-04-01-07-36-168.png, 
> image-2023-03-04-01-08-29-042.png, image-2023-03-04-01-09-24-199.png
>
>
> When I do failover tests (like killing the JM/TM) while using pipeline mode 
> to run a bounded source like Kafka, I found the job does not finish even when 
> every partition's data has been consumed.
>  
> After digging into the code, I found this logic does not run when the JM 
> recovers: the partition infos are not changed, so noMoreNewPartitionSplits is 
> not set to true, and then this code path will not run:
>  
> !image-2023-02-10-13-23-38-430.png!
>  
> !image-2023-02-10-13-24-46-929.png!
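
A simplified sketch of the failure mode (only the noMoreNewPartitionSplits flag 
comes from the description; handlePartitionSplitChanges and assignSplits are 
illustrative stand-ins, not the real enumerator code):
{code:java}
// The flag is only set inside the partition-discovery callback:
private boolean noMoreNewPartitionSplits = false;

private void handlePartitionSplitChanges(Set<TopicPartition> newPartitions) {
    assignSplits(newPartitions);
    // Bounded source: the one-shot discovery is finished.
    noMoreNewPartitionSplits = true;
}

// After a JM failover the partitions are restored from state and discovery
// reports no changes, so the callback never runs again, the flag stays false,
// and "no more splits" is never signaled -- the job never finishes.
{code}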



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31596) Cleanup usage of deprecated methods in TableEnvironment

2023-03-23 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17704441#comment-17704441
 ] 

Ran Tao commented on FLINK-31596:
-

Hi Jark, it's a good improvement to remove the deprecated methods in TableEnv. 
I found other deprecated places in TableEnvironment:

getCompletionHints
scan (can be replaced by the from method; see the example below)
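
For example (a small sketch; the table name is made up):
{code:java}
Table t1 = tEnv.scan("Orders"); // deprecated
Table t2 = tEnv.from("Orders"); // replacement
{code}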

> Cleanup usage of deprecated methods in TableEnvironment
> ---
>
> Key: FLINK-31596
> URL: https://issues.apache.org/jira/browse/FLINK-31596
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Major
>
> This is a preparation to remove the deprecated methods in Table API for Flink 
> v2.0. Currently, the deprecated methods of TableEnvironment and 
> StreamTableEnvironment are still used in many places. This is an umbrella 
> issue to clean up the usage. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-30935) Add KafkaSerializer deserialize check when using SimpleVersionedSerializer

2023-03-22 Thread Ran Tao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-30935 ]


Ran Tao deleted comment on FLINK-30935:
-

was (Author: lemonjing):
@[~mapohl] [~Leonard] Can you help me review this PR? Thanks.

> Add KafkaSerializer deserialize check when using SimpleVersionedSerializer
> --
>
> Key: FLINK-30935
> URL: https://issues.apache.org/jira/browse/FLINK-30935
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> Currently, many of Kafka's serializer implementations do not perform a 
> version check, while other implementations of SimpleVersionedSerializer 
> support it.
> We can add it, as many other connectors' implementations do, to guard against 
> incompatible or corrupt state when restoring from a checkpoint.
>  
> {code:java}
> @Override
> public int getVersion() {
> return CURRENT_VERSION;
> }
> @Override
> public KafkaPartitionSplit deserialize(int version, byte[] serialized) throws 
> IOException {
> try (ByteArrayInputStream bais = new ByteArrayInputStream(serialized);
> DataInputStream in = new DataInputStream(bais)) {
> String topic = in.readUTF();
> int partition = in.readInt();
> long offset = in.readLong();
> long stoppingOffset = in.readLong();
> return new KafkaPartitionSplit(
> new TopicPartition(topic, partition), offset, stoppingOffset);
> }
> } {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-30935) Add KafkaSerializer deserialize check when using SimpleVersionedSerializer

2023-03-22 Thread Ran Tao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-30935 ]


Ran Tao deleted comment on FLINK-30935:
-

was (Author: lemonjing):
Hi [~chesnay] [~becket_qin], what do you think?

> Add KafkaSerializer deserialize check when using SimpleVersionedSerializer
> --
>
> Key: FLINK-30935
> URL: https://issues.apache.org/jira/browse/FLINK-30935
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> Currently, many of Kafka's serializer implementations do not perform a 
> version check, while other implementations of SimpleVersionedSerializer 
> support it.
> We can add it, as many other connectors' implementations do, to guard against 
> incompatible or corrupt state when restoring from a checkpoint.
>  
> {code:java}
> @Override
> public int getVersion() {
> return CURRENT_VERSION;
> }
> @Override
> public KafkaPartitionSplit deserialize(int version, byte[] serialized) throws 
> IOException {
> try (ByteArrayInputStream bais = new ByteArrayInputStream(serialized);
> DataInputStream in = new DataInputStream(bais)) {
> String topic = in.readUTF();
> int partition = in.readInt();
> long offset = in.readLong();
> long stoppingOffset = in.readLong();
> return new KafkaPartitionSplit(
> new TopicPartition(topic, partition), offset, stoppingOffset);
> }
> } {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30935) Add KafkaSerializer deserialize check when using SimpleVersionedSerializer

2023-03-22 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30935:

Description: 
Currently, many of Kafka's serializer implementations do not perform a version 
check, while other implementations of SimpleVersionedSerializer support it.

We can add it, as many other connectors' implementations do, to guard against 
incompatible or corrupt state when restoring from a checkpoint.

 
{code:java}
@Override
public int getVersion() {
return CURRENT_VERSION;
}

@Override
public KafkaPartitionSplit deserialize(int version, byte[] serialized) throws 
IOException {
try (ByteArrayInputStream bais = new ByteArrayInputStream(serialized);
DataInputStream in = new DataInputStream(bais)) {
String topic = in.readUTF();
int partition = in.readInt();
long offset = in.readLong();
long stoppingOffset = in.readLong();
return new KafkaPartitionSplit(
new TopicPartition(topic, partition), offset, stoppingOffset);
}
} {code}
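
For illustration, a version check in the deserialize method above could look 
like this (a minimal sketch, not the final patch; the failure message and the 
choice to throw IOException are illustrative):
{code:java}
@Override
public KafkaPartitionSplit deserialize(int version, byte[] serialized) throws IOException {
    if (version != CURRENT_VERSION) {
        // Fail fast instead of silently mis-reading bytes that were written
        // by an incompatible serializer version.
        throw new IOException(
                "Unsupported split version " + version
                        + "; this deserializer only supports version " + CURRENT_VERSION);
    }
    try (ByteArrayInputStream bais = new ByteArrayInputStream(serialized);
            DataInputStream in = new DataInputStream(bais)) {
        String topic = in.readUTF();
        int partition = in.readInt();
        long offset = in.readLong();
        long stoppingOffset = in.readLong();
        return new KafkaPartitionSplit(
                new TopicPartition(topic, partition), offset, stoppingOffset);
    }
}
{code}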
 

 

  was:
{code:java}
@Override
public int getVersion() {
return CURRENT_VERSION;
}

@Override
public KafkaPartitionSplit deserialize(int version, byte[] serialized) throws 
IOException {
try (ByteArrayInputStream bais = new ByteArrayInputStream(serialized);
DataInputStream in = new DataInputStream(bais)) {
String topic = in.readUTF();
int partition = in.readInt();
long offset = in.readLong();
long stoppingOffset = in.readLong();
return new KafkaPartitionSplit(
new TopicPartition(topic, partition), offset, stoppingOffset);
}
} {code}
Currently, many of Kafka's serializer implementations do not perform a version 
check. I think we can add it, as many other connectors' implementations do, to 
guard against incompatible or corrupt state when restoring from a checkpoint.


> Add KafkaSerializer deserialize check when using SimpleVersionedSerializer
> --
>
> Key: FLINK-30935
> URL: https://issues.apache.org/jira/browse/FLINK-30935
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> Currently, many of Kafka's serializer implementations do not perform a 
> version check, while other implementations of SimpleVersionedSerializer 
> support it.
> We can add it, as many other connectors' implementations do, to guard against 
> incompatible or corrupt state when restoring from a checkpoint.
>  
> {code:java}
> @Override
> public int getVersion() {
> return CURRENT_VERSION;
> }
> @Override
> public KafkaPartitionSplit deserialize(int version, byte[] serialized) throws 
> IOException {
> try (ByteArrayInputStream bais = new ByteArrayInputStream(serialized);
> DataInputStream in = new DataInputStream(bais)) {
> String topic = in.readUTF();
> int partition = in.readInt();
> long offset = in.readLong();
> long stoppingOffset = in.readLong();
> return new KafkaPartitionSplit(
> new TopicPartition(topic, partition), offset, stoppingOffset);
> }
> } {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31452) Show databases should better return sorted results

2023-03-22 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17703676#comment-17703676
 ] 

Ran Tao commented on FLINK-31452:
-

[~jark] Many mature engines such as MySQL/PostgreSQL return sorted database 
lists. What do you think?

> Show databases should better return sorted results
> --
>
> Key: FLINK-31452
> URL: https://issues.apache.org/jira/browse/FLINK-31452
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> Flink's SHOW DATABASES does not return sorted results. I think we can add 
> support for this operation, so that when the returned results are large, 
> users can locate a result easily. Mature engines such as MySQL do support it.
> Flink SQL> show databases;
> +--+
> |   database name|
> +--+
> |default_database|
> |            test|
> |             sys|
> |           ca_db|
> +--+
> e.g. mysql
> ++
> |Database          |
> ++
> |information_schema|
> |mysql              |
> |performance_schema|
> |sys                |
> |test              |
> |test_db            |
> e.g.  pg:
>     Name    |  Owner   | Encoding |   Collate   
> +--+--+-
>  pgbench    | postgres | UTF8     | en_US.UTF-8 
>  postgres   | postgres | UTF8     | en_US.UTF-8 
>  slonmaster | postgres | UTF8     | en_US.UTF-8 
>  slonslave  | postgres | UTF8     | en_US.UTF-8 
>  template0  | postgres | UTF8     | en_US.UTF-8 
>  template1  | postgres | UTF8     | en_US.UTF-8 
>  test       | postgres | UTF8     | en_US.UTF-8 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31452) Show databases should better return sorted results

2023-03-22 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31452:

Description: 
Flink's SHOW DATABASES does not return sorted results. I think we can add 
support for this operation, so that when the returned results are large, users 
can locate a result easily. Mature engines such as MySQL do support it.

Flink SQL> show databases;
+------------------+
|    database name |
+------------------+
| default_database |
|             test |
|              sys |
|            ca_db |
+------------------+

e.g. mysql:

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
| test_db            |
+--------------------+

e.g. pg:

    Name    |  Owner   | Encoding |   Collate
------------+----------+----------+-------------
 pgbench    | postgres | UTF8     | en_US.UTF-8
 postgres   | postgres | UTF8     | en_US.UTF-8
 slonmaster | postgres | UTF8     | en_US.UTF-8
 slonslave  | postgres | UTF8     | en_US.UTF-8
 template0  | postgres | UTF8     | en_US.UTF-8
 template1  | postgres | UTF8     | en_US.UTF-8
 test       | postgres | UTF8     | en_US.UTF-8
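
For illustration, a minimal sketch of the proposed change, assuming the 
operation currently returns the catalog's database names in listing order 
(catalog and the surrounding code are illustrative, not the actual call site):
{code:java}
String[] databases =
        catalog.listDatabases().stream()
                .sorted() // natural (lexicographic) order, like MySQL/PG
                .toArray(String[]::new);
{code}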

 

  was:
Flink's SHOW DATABASES does not return sorted results. I think we can add 
support for this operation, so that when the returned results are large, users 
can locate a result easily. Mature engines such as MySQL do support it.

Flink SQL> show databases;
+--+
|   database name|

+--+
|default_database|
|            test|
|             sys|
|           ca_db|

+--+

e.g. mysql

++
|Database          |

++
|information_schema|
|mysql              |
|performance_schema|
|sys                |
|test              |
|test_db            |

++
6 rows in set (0.01 sec)

 


> Show databases should better return sorted results
> --
>
> Key: FLINK-31452
> URL: https://issues.apache.org/jira/browse/FLINK-31452
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> Flink's SHOW DATABASES does not return sorted results. I think we can add 
> support for this operation, so that when the returned results are large, 
> users can locate a result easily. Mature engines such as MySQL do support it.
> Flink SQL> show databases;
> +--+
> |   database name|
> +--+
> |default_database|
> |            test|
> |             sys|
> |           ca_db|
> +--+
> e.g. mysql
> ++
> |Database          |
> ++
> |information_schema|
> |mysql              |
> |performance_schema|
> |sys                |
> |test              |
> |test_db            |
> e.g.  pg:
>     Name    |  Owner   | Encoding |   Collate   
> +--+--+-
>  pgbench    | postgres | UTF8     | en_US.UTF-8 
>  postgres   | postgres | UTF8     | en_US.UTF-8 
>  slonmaster | postgres | UTF8     | en_US.UTF-8 
>  slonslave  | postgres | UTF8     | en_US.UTF-8 
>  template0  | postgres | UTF8     | en_US.UTF-8 
>  template1  | postgres | UTF8     | en_US.UTF-8 
>  test       | postgres | UTF8     | en_US.UTF-8 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31452) Show databases should better return sorted results

2023-03-22 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31452:

Description: 
Flink's SHOW DATABASES does not return sorted results. I think we can add 
support for this operation, so that when the returned results are large, users 
can locate a result easily. Mature engines such as MySQL do support it.

Flink SQL> show databases;
+--+
|   database name|

+--+
|default_database|
|            test|
|             sys|
|           ca_db|

+--+

e.g. mysql

++
|Database          |

++
|information_schema|
|mysql              |
|performance_schema|
|sys                |
|test              |
|test_db            |

++
6 rows in set (0.01 sec)

 

  was:
Currently TableEnv supports returning sorted views and tables.

But other operations, such as show databases, have no sorting. I think we can 
add support for these operations, so that when returned results are large, 
users can locate a result easily.

Flink SQL> show databases;
+--+
|    database name |
+--+
| default_database |
|             test |
|              sys |
|            ca_db |
+--+

e.g. mysql

++
| Database           |
++
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
| test_db            |
++
6 rows in set (0.01 sec)

 


> Show databases should better return sorted results
> --
>
> Key: FLINK-31452
> URL: https://issues.apache.org/jira/browse/FLINK-31452
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>
> Flink's SHOW DATABASES does not return sorted results. I think we can add 
> support for this operation, so that when the returned results are large, 
> users can locate a result easily. Mature engines such as MySQL do support it.
> Flink SQL> show databases;
> +--+
> |   database name|
> +--+
> |default_database|
> |            test|
> |             sys|
> |           ca_db|
> +--+
> e.g. mysql
> ++
> |Database          |
> ++
> |information_schema|
> |mysql              |
> |performance_schema|
> |sys                |
> |test              |
> |test_db            |
> ++
> 6 rows in set (0.01 sec)
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31452) Show databases should better return sorted results

2023-03-22 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31452:

Parent: (was: FLINK-31256)
Issue Type: Improvement  (was: Sub-task)

> Show databases should better return sorted results
> --
>
> Key: FLINK-31452
> URL: https://issues.apache.org/jira/browse/FLINK-31452
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>
> Currently TableEnv supports returning sorted views and tables.
> But other operations, such as show databases, have no sorting. I think we can 
> add support for these operations, so that when returned results are large, 
> users can locate a result easily.
> Flink SQL> show databases;
> +--+
> |    database name |
> +--+
> | default_database |
> |             test |
> |              sys |
> |            ca_db |
> +--+
> e.g. mysql
> ++
> | Database           |
> ++
> | information_schema |
> | mysql              |
> | performance_schema |
> | sys                |
> | test               |
> | test_db            |
> ++
> 6 rows in set (0.01 sec)
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31452) Show databases should better return sorted results

2023-03-22 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31452:

Priority: Major  (was: Minor)

> Show databases should better return sorted results
> --
>
> Key: FLINK-31452
> URL: https://issues.apache.org/jira/browse/FLINK-31452
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>
> Currently TableEnv supports returning sorted views and tables.
> But other operations, such as show databases, have no sorting. I think we can 
> add support for these operations, so that when returned results are large, 
> users can locate a result easily.
> Flink SQL> show databases;
> +--+
> |    database name |
> +--+
> | default_database |
> |             test |
> |              sys |
> |            ca_db |
> +--+
> e.g. mysql
> ++
> | Database           |
> ++
> | information_schema |
> | mysql              |
> | performance_schema |
> | sys                |
> | test               |
> | test_db            |
> ++
> 6 rows in set (0.01 sec)
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31452) Show databases should better return sorted results

2023-03-22 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31452:

Labels: table  (was: )

> Show databases should better return sorted results
> --
>
> Key: FLINK-31452
> URL: https://issues.apache.org/jira/browse/FLINK-31452
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>  Labels: table
>
> Currently TableEnv supports returning sorted views and tables.
> But other operations such as SHOW DATABASES do not sort their results. I
> think we can add sorting support for these operations, so that when the
> returned result set is large, users can locate entries easily.
> Flink SQL> show databases;
> +------------------+
> |    database name |
> +------------------+
> | default_database |
> |             test |
> |              sys |
> |            ca_db |
> +------------------+
> e.g. mysql
> +--------------------+
> | Database           |
> +--------------------+
> | information_schema |
> | mysql              |
> | performance_schema |
> | sys                |
> | test               |
> | test_db            |
> +--------------------+
> 6 rows in set (0.01 sec)
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31452) Show databases should better return sorted results

2023-03-22 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31452:

Labels:   (was: table)

> Show databases should better return sorted results
> --
>
> Key: FLINK-31452
> URL: https://issues.apache.org/jira/browse/FLINK-31452
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>
> Currently TableEnv supports returning sorted views and tables.
> But other operations such as SHOW DATABASES do not sort their results. I
> think we can add sorting support for these operations, so that when the
> returned result set is large, users can locate entries easily.
> Flink SQL> show databases;
> +------------------+
> |    database name |
> +------------------+
> | default_database |
> |             test |
> |              sys |
> |            ca_db |
> +------------------+
> e.g. mysql
> +--------------------+
> | Database           |
> +--------------------+
> | information_schema |
> | mysql              |
> | performance_schema |
> | sys                |
> | test               |
> | test_db            |
> +--------------------+
> 6 rows in set (0.01 sec)
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31452) Show databases should better return sorted results

2023-03-22 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31452:

Description: 
Currently TableEnv supports returning sorted views and tables.

But other operations such as SHOW DATABASES do not sort their results. I think
we can add sorting support for these operations, so that when the returned
result set is large, users can locate entries easily.

Flink SQL> show databases;
+------------------+
|    database name |
+------------------+
| default_database |
|             test |
|              sys |
|            ca_db |
+------------------+

e.g. mysql

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
| test_db            |
+--------------------+
6 rows in set (0.01 sec)

 

  was:
Currently TableEnv supports returning sorted views and tables.

 
{code:java}
@Override
public String[] listTables() {
    return catalogManager.listTables().stream().sorted().toArray(String[]::new);
}

@Override
public String[] listViews() {
    return catalogManager.listViews().stream().sorted().toArray(String[]::new);
} {code}
But other operations such as SHOW DATABASES do not sort their results. I think
we can add sorting support for these operations, so that when the returned
result set is large, users can locate entries easily.
{code:java}
@Override
public String[] listDatabases() {
    return catalogManager
            .getCatalog(catalogManager.getCurrentCatalog())
            .get()
            .listDatabases()
            .toArray(new String[0]);
} {code}
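For reference, a minimal sketch of the sorted variant, mirroring the
listTables()/listViews() pattern above and assuming the same CatalogManager
API shown in the old description (listDatabases() returning a collection of
Strings); this is a sketch, not the final implementation:
{code:java}
// Hedged sketch: sort database names the same way listTables()/listViews()
// already do before materializing the array.
@Override
public String[] listDatabases() {
    return catalogManager
            .getCatalog(catalogManager.getCurrentCatalog())
            .get()
            .listDatabases()
            .stream()
            .sorted()
            .toArray(String[]::new);
} {code}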


> Show databases should better return sorted results
> --
>
> Key: FLINK-31452
> URL: https://issues.apache.org/jira/browse/FLINK-31452
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Priority: Minor
>
> Currently TableEnv supports returning sorted views and tables.
> But other operations such as SHOW DATABASES do not sort their results. I
> think we can add sorting support for these operations, so that when the
> returned result set is large, users can locate entries easily.
> Flink SQL> show databases;
> +------------------+
> |    database name |
> +------------------+
> | default_database |
> |             test |
> |              sys |
> |            ca_db |
> +------------------+
> e.g. mysql
> +--------------------+
> | Database           |
> +--------------------+
> | information_schema |
> | mysql              |
> | performance_schema |
> | sys                |
> | test               |
> | test_db            |
> +--------------------+
> 6 rows in set (0.01 sec)
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31574) Cleanup unused private methods in OperationConverterUtils

2023-03-22 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31574:

Description: some private methods in OperationConverterUtils should be removed
because the public methods that called them were removed in FLINK-29585  (was:
some private methods in OperationConverterUtils should be removed because the
caller methods that exposed them were removed in
[FLINK-29585|https://issues.apache.org/jira/browse/FLINK-29585])

> Cleanup unused private methods in OperationConverterUtils
> -
>
> Key: FLINK-31574
> URL: https://issues.apache.org/jira/browse/FLINK-31574
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0, 1.17.1
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> some private methods in OperationConverterUtils should be removed because
> the public methods that called them were removed in FLINK-29585



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31574) Cleanup unused private methods in OperationConverterUtils

2023-03-22 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31574:

Affects Version/s: 1.17.1

> Cleanup unused private methods in OperationConverterUtils
> -
>
> Key: FLINK-31574
> URL: https://issues.apache.org/jira/browse/FLINK-31574
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0, 1.17.1
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> some private methods in OperationConverterUtils should be removed because
> the caller methods that exposed them were removed in
> [FLINK-29585|https://issues.apache.org/jira/browse/FLINK-29585]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31574) Cleanup unused private methods in OperationConverterUtils

2023-03-22 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17703633#comment-17703633
 ] 

Ran Tao commented on FLINK-31574:
-

[~luoyuxia] can you help to take a look?

> Cleanup unused private methods in OperationConverterUtils
> -
>
> Key: FLINK-31574
> URL: https://issues.apache.org/jira/browse/FLINK-31574
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0
>Reporter: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> some private methods in OperationConverterUtils should be removed because
> the caller methods that exposed them were removed in
> [FLINK-29585|https://issues.apache.org/jira/browse/FLINK-29585]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31574) Cleanup unused private methods in OperationConverterUtils

2023-03-22 Thread Ran Tao (Jira)
Ran Tao created FLINK-31574:
---

 Summary: Cleanup unused private methods in OperationConverterUtils
 Key: FLINK-31574
 URL: https://issues.apache.org/jira/browse/FLINK-31574
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.17.0
Reporter: Ran Tao


some private methods in OperationConverterUtils should be removed because the
caller methods that exposed them were removed in
[FLINK-29585|https://issues.apache.org/jira/browse/FLINK-29585]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31464) Move SqlNode conversion logic out from SqlToOperationConverter

2023-03-20 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17702669#comment-17702669
 ] 

Ran Tao commented on FLINK-31464:
-

Got it, thanks for the explanation.

> Move SqlNode conversion logic out from SqlToOperationConverter
> --
>
> Key: FLINK-31464
> URL: https://issues.apache.org/jira/browse/FLINK-31464
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Planner
>Reporter: luoyuxia
>Assignee: xuzhiwen
>Priority: Major
>
> Similar to FLINK-31368, the `SqlToOperationConverter` is a bit bloated. We
> can refactor it to keep the code length of this class from growing quickly.
> We can follow the idea proposed in FLINK-31368.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31510) Use getMemorySize instead of getMemory

2023-03-20 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17702568#comment-17702568
 ] 

Ran Tao edited comment on FLINK-31510 at 3/20/23 10:04 AM:
---

[~slfan1989] thanks for explanations. got it.


was (Author: lemonjing):
[~slfan1989] thanks for explantations. got it.

> Use getMemorySize instead of getMemory
> --
>
> Key: FLINK-31510
> URL: https://issues.apache.org/jira/browse/FLINK-31510
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.18.0
>Reporter: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>
> In YARN-4844, getMemorySize is used instead of getMemory, because using int
> to represent memory may exceed its bounds in some cases and produce negative
> numbers.
> This change was merged in Hadoop 2.8.0, so we should use getMemorySize
> instead of getMemory.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31510) Use getMemorySize instead of getMemory

2023-03-20 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17702568#comment-17702568
 ] 

Ran Tao commented on FLINK-31510:
-

[~slfan1989] thanks for explantations. got it.

> Use getMemorySize instead of getMemory
> --
>
> Key: FLINK-31510
> URL: https://issues.apache.org/jira/browse/FLINK-31510
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.18.0
>Reporter: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>
> In YARN-4844, getMemorySize is used instead of getMemory, because using int
> to represent memory may exceed its bounds in some cases and produce negative
> numbers.
> This change was merged in Hadoop 2.8.0, so we should use getMemorySize
> instead of getMemory.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31464) Move SqlNode conversion logic out from SqlToOperationConverter

2023-03-20 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17702535#comment-17702535
 ] 

Ran Tao commented on FLINK-31464:
-

[~twalthr] I think we currently need the CatalogManager in the context to
convert unresolved identifiers into resolved identifiers and pass them to
Flink POJO classes; another usage is to get the ResolvedTable when converting
some SqlAlterTables (which may need table information).

> Move SqlNode conversion logic out from SqlToOperationConverter
> --
>
> Key: FLINK-31464
> URL: https://issues.apache.org/jira/browse/FLINK-31464
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Planner
>Reporter: luoyuxia
>Assignee: xuzhiwen
>Priority: Major
>
> Similar to FLINK-31368, the `SqlToOperationConverter` is a bit bloated. We
> can refactor it to keep the code length of this class from growing quickly.
> We can follow the idea proposed in FLINK-31368.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31500) Move SqlAlterTableSchema conversion logic to AlterTableSchemaConverter

2023-03-19 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17702436#comment-17702436
 ] 

Ran Tao commented on FLINK-31500:
-

[~jark] Hi Jark, I'd be glad to fix this; it will help me learn the new
conversion approach. Can you assign this ticket to me? I will try to support it.

> Move SqlAlterTableSchema conversion logic to AlterTableSchemaConverter
> --
>
> Key: FLINK-31500
> URL: https://issues.apache.org/jira/browse/FLINK-31500
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Major
>
> Introduce {{AlterTableSchemaConverter}} and move the conversion logic of 
> SqlAlterTableSchema -> AlterTableChangeOperation to it. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31510) Use getMemorySize instead of getMemory

2023-03-19 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17702426#comment-17702426
 ] 

Ran Tao commented on FLINK-31510:
-

Hi [~slfan1989], since the memory unit you explained is MB, we could use int;
why must it be long?

> Use getMemorySize instead of getMemory
> --
>
> Key: FLINK-31510
> URL: https://issues.apache.org/jira/browse/FLINK-31510
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.18.0
>Reporter: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>
> In YARN-4844, getMemorySize is used instead of getMemory, because using int
> to represent memory may exceed its bounds in some cases and produce negative
> numbers.
> This change was merged in Hadoop 2.8.0, so we should use getMemorySize
> instead of getMemory.
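To illustrate the overflow risk with a standalone, hedged example (the values
are made up; this is not YARN or Flink code): int arithmetic on MB-denominated
memory wraps around well before the raw MB value itself exceeds Integer.MAX_VALUE.
{code:java}
// Standalone illustration of why a long-returning accessor is safer:
// converting an int MB value to bytes with int arithmetic overflows.
public class MemoryOverflowDemo {
    public static void main(String[] args) {
        int memoryMb = 2_100_000;                       // ~2 TB expressed in MB
        int bytesInt = memoryMb * 1024 * 1024;          // int arithmetic wraps around
        long bytesLong = (long) memoryMb * 1024 * 1024; // widen to long first
        System.out.println(bytesInt);   // -1308622848 (negative, overflowed)
        System.out.println(bytesLong);  // 2202009600000 (correct)
    }
} {code}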



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-30274) Upgrade commons-collections 3.x to commons-collections4

2023-03-19 Thread Ran Tao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-30274 ]


Ran Tao deleted comment on FLINK-30274:
-

was (Author: lemonjing):
[~martijnvisser] Hi, Martijn, Can we push this issue forward? PTAL. 
[https://github.com/apache/flink/pull/21442] thanks.

> Upgrade commons-collections 3.x to commons-collections4
> ---
>
> Key: FLINK-30274
> URL: https://issues.apache.org/jira/browse/FLINK-30274
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System
>Affects Versions: 1.16.0
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-12-02-16-40-22-172.png
>
>
> First, Apache commons-collections 3.x is a Java 1.3-compatible version and
> does not use Java 5 generics. Apache commons-collections4 4.4 is an upgraded
> version of commons-collections built with Java 8.
> Apache Spark had the same issue: [https://github.com/apache/spark/pull/35257]
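For context, a hedged sketch of a typical call-site change in this kind of
migration (the class and method below are illustrative, not taken from the
Flink PR): collections4 lives in a new package and is generics-aware, so most
call sites only need import and type updates.
{code:java}
// before (commons-collections 3.x, raw types):
//   import org.apache.commons.collections.CollectionUtils;
// after (commons-collections4):
import org.apache.commons.collections4.CollectionUtils;

import java.util.List;

public class CollectionsMigrationExample {
    public static boolean hasRows(List<String> rows) {
        // same utility method, now resolved from the collections4 package
        return CollectionUtils.isNotEmpty(rows);
    }
} {code}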



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-31380) Support enhanced show catalogs syntax

2023-03-17 Thread Ran Tao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-31380 ]


Ran Tao deleted comment on FLINK-31380:
-

was (Author: lemonjing):
Hi, [~jark]. Currently there are three ways to support these auxiliary SQL
statements:
1. Change every operation to support LIKE/ILIKE or FROM/IN, but for each
future syntax change we would need to keep modifying them (the one-by-one way).

2. Add a common abstract class to do this, but future changes would require
modifying the abstract class as well (the abstract-class way).

3. A decorator model: add a concrete decorator to support a certain syntax.
Its benefits are scalability and loose coupling; the PR is currently
implemented in this way (the decorator way).

I'm wondering if I've overcomplicated the problem (because these classes are
not @Public). WDYT?

Sorry, this was not discussed in the FLIP. The FLIP originally proposed the
first way, but I found that most of the work was repeated in the
implementation.

> Support enhanced show catalogs syntax
> -
>
> Key: FLINK-31380
> URL: https://issues.apache.org/jira/browse/FLINK-31380
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> As discussed in the FLIP, we will support new syntax for some show operations.
> To avoid bloat, this ticket supports ShowCatalogs.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31380) Support enhanced show catalogs syntax

2023-03-17 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17701912#comment-17701912
 ] 

Ran Tao edited comment on FLINK-31380 at 3/17/23 5:26 PM:
--

Hi, [~jark]. Currently there are three ways to support these auxiliary SQL
statements:
1. Change every operation to support LIKE/ILIKE or FROM/IN, but for each
future syntax change we would need to keep modifying them (the one-by-one way).

2. Add a common abstract class to do this, but future changes would require
modifying the abstract class as well (the abstract-class way).

3. A decorator model: add a concrete decorator to support a certain syntax.
Its benefits are scalability and loose coupling; the PR is currently
implemented in this way (the decorator way).

I'm wondering if I've overcomplicated the problem (because these classes are
not @Public). WDYT?

Sorry, this was not discussed in the FLIP. The FLIP originally proposed the
first way, but I found that most of the work was repeated in the
implementation.


was (Author: lemonjing):
Hi, [~jark]. Currently there are three ways to support these auxiliary SQL
statements:
1. Change every operation to support LIKE/ILIKE or FROM/IN, but for each
future syntax change we would need to keep modifying them (the one-by-one way).

2. Add a common abstract class to do this, but future changes would require
modifying the abstract class as well (the abstract-class way).

3. A decorator model: add a concrete decorator to support a certain syntax.
Its benefits are scalability and loose coupling; the PR is currently
implemented in this way (the decorator way).

I'm wondering if I've overcomplicated the problem (because these classes are
not @Public). WDYT?

> Support enhanced show catalogs syntax
> -
>
> Key: FLINK-31380
> URL: https://issues.apache.org/jira/browse/FLINK-31380
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> As discussed in the FLIP, we will support new syntax for some show operations.
> To avoid bloat, this ticket supports ShowCatalogs.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31380) Support enhanced show catalogs syntax

2023-03-17 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17701912#comment-17701912
 ] 

Ran Tao edited comment on FLINK-31380 at 3/17/23 5:24 PM:
--

Hi, [~jark]. Currently there are three ways to support these auxiliary SQL
statements:
1. Change every operation to support LIKE/ILIKE or FROM/IN, but for each
future syntax change we would need to keep modifying them (the one-by-one way).

2. Add a common abstract class to do this, but future changes would require
modifying the abstract class as well (the abstract-class way).

3. A decorator model: add a concrete decorator to support a certain syntax.
Its benefits are scalability and loose coupling; the PR is currently
implemented in this way (the decorator way).

I'm wondering if I've overcomplicated the problem (because these classes are
not @Public). WDYT?


was (Author: lemonjing):
Hi, [~jark]. Currently there are three ways to support these auxiliary SQL
statements:
1. Change every operation to support LIKE/ILIKE or FROM/IN, but for each
future syntax change we would need to keep modifying them (the one-by-one way).

2. Add a common abstract class to do this, but future changes would require
modifying the abstract class as well (the abstract-class way).

3. A decorator model: add a concrete decorator to support a certain syntax.
Its benefits are scalability and loose coupling; the PR is currently
implemented in this way.

I'm wondering if I've overcomplicated the problem (because these classes are
not @Public). WDYT?

> Support enhanced show catalogs syntax
> -
>
> Key: FLINK-31380
> URL: https://issues.apache.org/jira/browse/FLINK-31380
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> As discussed in the FLIP, we will support new syntax for some show operations.
> To avoid bloat, this ticket supports ShowCatalogs.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31380) Support enhanced show catalogs syntax

2023-03-17 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17701912#comment-17701912
 ] 

Ran Tao commented on FLINK-31380:
-

Hi, [~jark]. Currently there are three ways to support these auxiliary SQL
statements:
1. Change every operation to support LIKE/ILIKE or FROM/IN, but for each
future syntax change we would need to keep modifying them (the one-by-one way).

2. Add a common abstract class to do this, but future changes would require
modifying the abstract class as well (the abstract-class way).

3. A decorator model: add a concrete decorator to support a certain syntax.
Its benefits are scalability and loose coupling; the PR is currently
implemented in this way.

I'm wondering if I've overcomplicated the problem (because these classes are
not @Public). WDYT?
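To make option 3 concrete, a minimal hedged sketch of the decorator idea (all
class names here are hypothetical, not the actual PR code):
{code:java}
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical base abstraction for a SHOW operation's result rows.
interface ShowOperation {
    List<String> getResults();
}

// Decorator adding LIKE/ILIKE filtering on top of any existing ShowOperation.
class LikeFilterDecorator implements ShowOperation {
    private final ShowOperation delegate;
    private final String likePattern;      // SQL LIKE pattern, e.g. "%log1"
    private final boolean caseInsensitive; // true for ILIKE

    LikeFilterDecorator(ShowOperation delegate, String likePattern, boolean caseInsensitive) {
        this.delegate = delegate;
        this.likePattern = likePattern;
        this.caseInsensitive = caseInsensitive;
    }

    @Override
    public List<String> getResults() {
        // translate the LIKE pattern into a regex: % -> .*, _ -> .
        String regex = likePattern.replace("%", ".*").replace("_", ".");
        Pattern p = Pattern.compile(regex, caseInsensitive ? Pattern.CASE_INSENSITIVE : 0);
        return delegate.getResults().stream()
                .filter(name -> p.matcher(name).matches())
                .collect(Collectors.toList());
    }
} {code}
A FROM/IN variant would just be another decorator stacked on the same
interface, so the individual show operations stay untouched.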

> Support enhanced show catalogs syntax
> -
>
> Key: FLINK-31380
> URL: https://issues.apache.org/jira/browse/FLINK-31380
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Major
>  Labels: pull-request-available
>
> As discussed in the FLIP, we will support new syntax for some show operations.
> To avoid bloat, this ticket supports ShowCatalogs.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31319) Kafka new source partitionDiscoveryIntervalMs=0 cause bounded source can not quit

2023-03-16 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17701462#comment-17701462
 ] 

Ran Tao commented on FLINK-31319:
-

[~ramanverma] thanks.
BP-1.16: [https://github.com/apache/flink/pull/22192]
BP-1.17: [https://github.com/apache/flink/pull/22193]
original: [https://github.com/apache/flink-connector-kafka/pull/8]

> Kafka new source partitionDiscoveryIntervalMs=0 cause bounded source can not 
> quit
> -
>
> Key: FLINK-31319
> URL: https://issues.apache.org/jira/browse/FLINK-31319
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.17.0, 1.16.1
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2023-03-04-01-37-29-360.png, 
> image-2023-03-04-01-39-20-352.png, image-2023-03-04-01-40-44-124.png, 
> image-2023-03-04-01-41-55-664.png
>
>
> As the Kafka option description states, partitionDiscoveryIntervalMs <= 0
> means partition discovery is disabled.
> !image-2023-03-04-01-37-29-360.png|width=781,height=147!
> This matches how the Kafka enumerator is started:
> !image-2023-03-04-01-39-20-352.png|width=465,height=311!
> but the inner handlePartitionSplitChanges uses the wrong if condition (< 0):
> !image-2023-03-04-01-40-44-124.png|width=576,height=237!
>  
> This causes noMoreNewPartitionSplits to never be set to true.
> !image-2023-03-04-01-41-55-664.png|width=522,height=610!
> As a result, a bounded source never calls signalNoMoreSplits, so it does not
> quit. Besides, the two branches of the if condition should be mutually exclusive.
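For reference, a hedged sketch of the fix implied above; the field and method
names are read off the screenshots and may differ from the actual code:
{code:java}
// The enumerator's branches must agree with the documented semantics:
// partitionDiscoveryIntervalMs <= 0 means discovery is disabled.
if (partitionDiscoveryIntervalMs <= 0) {
    // discovery disabled: mark enumeration as finished so a bounded source
    // can emit signalNoMoreSplits() and terminate
    noMoreNewPartitionSplits = true;
} else {
    // discovery enabled: keep periodic partition discovery running
    schedulePartitionDiscovery(); // hypothetical helper, for illustration only
} {code}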



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31491) Flink table planner NestedLoopJoinTest.testLeftOuterJoinWithFilter failed in branch release1.16

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31491:

Summary: Flink table planner NestedLoopJoinTest.testLeftOuterJoinWithFilter 
failed in branch release1.16  (was: table planner 
NestedLoopJoinTest.testLeftOuterJoinWithFilter failed in branch release1.16)

> Flink table planner NestedLoopJoinTest.testLeftOuterJoinWithFilter failed in 
> branch release1.16
> ---
>
> Key: FLINK-31491
> URL: https://issues.apache.org/jira/browse/FLINK-31491
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>
> branch release1.16 NestedLoopJoinTest.testLeftOuterJoinWithFilter failed.
> Mar 16 12:30:00 [ERROR] Failures: 
> Mar 16 12:30:00 [ERROR]   NestedLoopJoinTest.testLeftOuterJoinWithFilter1:37 
> optimized exec plan expected:<...[InnerJoin], where=[[true], select=[a, e, 
> f], build=[left])
> Mar 16 12:30:00    :- Exchange(distribution=[broadcast])
> Mar 16 12:30:00    :  +- Calc(select=[a], where=[(a = 10)])
> Mar 16 12:30:00    :     +- LegacyTableSourceScan(table=[[default_catalog, 
> default_database, MyTable1, source: [TestTableSource(a, b, c)]]], fields=[a, 
> b, c])
> Mar 16 12:30:00    +- Calc(select=[e, f], where=[(d = 10])])
> Mar 16 12:30:00       +- LegacyT...> but was:<...[InnerJoin], where=[[(a = 
> d)], select=[a, d, e, f], build=[left])
> Mar 16 12:30:00    :- Exchange(distribution=[broadcast])
> Mar 16 12:30:00    :  +- Calc(select=[a], where=[SEARCH(a, Sarg[10])])
> Mar 16 12:30:00    :     +- LegacyTableSourceScan(table=[[default_catalog, 
> default_database, MyTable1, source: [TestTableSource(a, b, c)]]], fields=[a, 
> b, c])
> Mar 16 12:30:00    +- Calc(select=[d, e, f], where=[SEARCH(d, Sarg[10]])])
> Mar 16 12:30:00       +- LegacyT...>
> at 
> org.apache.flink.table.planner.utils.DiffRepository.assertEquals(DiffRepository.java:438)
>     at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.assertEqualsOrExpand(TableTestBase.scala:1075)
>     at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.assertPlanEquals(TableTestBase.scala:1008)
>     at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.doVerifyPlan(TableTestBase.scala:849)
>     at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.verifyExecPlan(TableTestBase.scala:636)
>     at 
> org.apache.flink.table.planner.plan.batch.sql.join.NestedLoopJoinTest.testLeftOuterJoinWithFilter1(NestedLoopJoinTest.scala:37)
>  
> azure ci: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47244&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4]
> It can also be reproduced locally. I think the commit that caused it may be:
> f0361c720cb18c4ae7dc669c6a5da5b09bc8f563
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31491) table planner NestedLoopJoinTest.testLeftOuterJoinWithFilter failed in branch release1.16

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31491:

Description: 
branch release1.16 NestedLoopJoinTest.testLeftOuterJoinWithFilter failed.

Mar 16 12:30:00 [ERROR] Failures: 
Mar 16 12:30:00 [ERROR]   NestedLoopJoinTest.testLeftOuterJoinWithFilter1:37 
optimized exec plan expected:<...[InnerJoin], where=[[true], select=[a, e, f], 
build=[left])
Mar 16 12:30:00    :- Exchange(distribution=[broadcast])
Mar 16 12:30:00    :  +- Calc(select=[a], where=[(a = 10)])
Mar 16 12:30:00    :     +- LegacyTableSourceScan(table=[[default_catalog, 
default_database, MyTable1, source: [TestTableSource(a, b, c)]]], fields=[a, b, 
c])
Mar 16 12:30:00    +- Calc(select=[e, f], where=[(d = 10])])
Mar 16 12:30:00       +- LegacyT...> but was:<...[InnerJoin], where=[[(a = d)], 
select=[a, d, e, f], build=[left])
Mar 16 12:30:00    :- Exchange(distribution=[broadcast])
Mar 16 12:30:00    :  +- Calc(select=[a], where=[SEARCH(a, Sarg[10])])
Mar 16 12:30:00    :     +- LegacyTableSourceScan(table=[[default_catalog, 
default_database, MyTable1, source: [TestTableSource(a, b, c)]]], fields=[a, b, 
c])
Mar 16 12:30:00    +- Calc(select=[d, e, f], where=[SEARCH(d, Sarg[10]])])
Mar 16 12:30:00       +- LegacyT...>

at 
org.apache.flink.table.planner.utils.DiffRepository.assertEquals(DiffRepository.java:438)
    at 
org.apache.flink.table.planner.utils.TableTestUtilBase.assertEqualsOrExpand(TableTestBase.scala:1075)
    at 
org.apache.flink.table.planner.utils.TableTestUtilBase.assertPlanEquals(TableTestBase.scala:1008)
    at 
org.apache.flink.table.planner.utils.TableTestUtilBase.doVerifyPlan(TableTestBase.scala:849)
    at 
org.apache.flink.table.planner.utils.TableTestUtilBase.verifyExecPlan(TableTestBase.scala:636)
    at 
org.apache.flink.table.planner.plan.batch.sql.join.NestedLoopJoinTest.testLeftOuterJoinWithFilter1(NestedLoopJoinTest.scala:37)

 

azure ci: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47244&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4]

It can also be reproduced locally. I think the commit that caused it may be:
f0361c720cb18c4ae7dc669c6a5da5b09bc8f563

 

  was:
branch release1.16 NestedLoopJoinTest.testLeftOuterJoinWithFilter failed.

```

/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/bin/java -ea 
-DforkNumber=01 -Dmvn.forkNumber=$1 -Dhadoop.version=2.8.5 
-Dcheckpointing.randomization=true -Dbuffer-debloat.randomization=true 
-Duser.country=US -Duser.language=en -Dcheckpointing.changelog=random 
-Dproject.basedir=/Users/chucheng/GitHub/flink/flink-table/flink-table-planner 
-Dtest.randomization.seed -Djunit.jupiter.extensions.autodetection.enabled=true 
-Djunit.jupiter.execution.parallel.enabled=true 
-Djunit.jupiter.execution.parallel.mode.default=same_thread 
-Djunit.jupiter.execution.parallel.mode.classes.default=same_thread 
-Djunit.jupiter.execution.parallel.config.strategy=dynamic 
-Didea.test.cyclic.buffer.size=1048576 
-javaagent:/Users/chucheng/Library/Application 
Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/223.8836.41/IntelliJ 
IDEA.app/Contents/lib/idea_rt.jar=50748:/Users/chucheng/Library/Application 
Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/223.8836.41/IntelliJ 
IDEA.app/Contents/bin -Dfile.encoding=UTF-8 -classpath 
/Users/chucheng/.m2/repository/org/junit/platform/junit-platform-launcher/1.8.1/junit-platform-launcher-1.8.1.jar:/Users/chucheng/Library/Application
 Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/223.8836.41/IntelliJ 
IDEA.app/Contents/lib/idea_rt.jar:/Users/chucheng/Library/Application 
Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/223.8836.41/IntelliJ 
IDEA.app/Contents/plugins/junit/lib/junit5-rt.jar:/Users/chucheng/Library/Application
 Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/223.8836.41/IntelliJ 
IDEA.app/Contents/plugins/junit/lib/junit-rt.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/cldrdata.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/dnsns.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/jaccess.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/localedata.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/nashorn.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/sunec.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/sunjce_provider.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/sunpkcs11.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/zipfs.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines

[jira] [Updated] (FLINK-31491) table planner NestedLoopJoinTest.testLeftOuterJoinWithFilter failed in branch release1.16

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31491:

Description: 
branch release1.16 NestedLoopJoinTest.testLeftOuterJoinWithFilter failed.

```

/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/bin/java -ea 
-DforkNumber=01 -Dmvn.forkNumber=$1 -Dhadoop.version=2.8.5 
-Dcheckpointing.randomization=true -Dbuffer-debloat.randomization=true 
-Duser.country=US -Duser.language=en -Dcheckpointing.changelog=random 
-Dproject.basedir=/Users/chucheng/GitHub/flink/flink-table/flink-table-planner 
-Dtest.randomization.seed -Djunit.jupiter.extensions.autodetection.enabled=true 
-Djunit.jupiter.execution.parallel.enabled=true 
-Djunit.jupiter.execution.parallel.mode.default=same_thread 
-Djunit.jupiter.execution.parallel.mode.classes.default=same_thread 
-Djunit.jupiter.execution.parallel.config.strategy=dynamic 
-Didea.test.cyclic.buffer.size=1048576 
-javaagent:/Users/chucheng/Library/Application 
Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/223.8836.41/IntelliJ 
IDEA.app/Contents/lib/idea_rt.jar=50748:/Users/chucheng/Library/Application 
Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/223.8836.41/IntelliJ 
IDEA.app/Contents/bin -Dfile.encoding=UTF-8 -classpath 
/Users/chucheng/.m2/repository/org/junit/platform/junit-platform-launcher/1.8.1/junit-platform-launcher-1.8.1.jar:/Users/chucheng/Library/Application
 Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/223.8836.41/IntelliJ 
IDEA.app/Contents/lib/idea_rt.jar:/Users/chucheng/Library/Application 
Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/223.8836.41/IntelliJ 
IDEA.app/Contents/plugins/junit/lib/junit5-rt.jar:/Users/chucheng/Library/Application
 Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/223.8836.41/IntelliJ 
IDEA.app/Contents/plugins/junit/lib/junit-rt.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/cldrdata.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/dnsns.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/jaccess.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/localedata.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/nashorn.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/sunec.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/sunjce_provider.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/sunpkcs11.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/ext/zipfs.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/management-agent.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/rt.jar:/Users/chucheng/GitHub/flink/flink-table/flink-table-planner/target/test-classes:/Users/chucheng/GitHub/flink/flink-table/flink-table-planner/target/classes:/Users/chucheng/.m2/repository/com/google/guava/guava/29.0-jre/guava-29.0-jre.jar:/Users/chucheng/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/Users/chucheng/.m2/repository/com/google/guava/listenablefuture/.0-empty-to-avoid-conflict-with-guava/listenablefuture-.0-empty-to-avoid-conflict-with-guava.jar:/Users/chucheng/.m2/repository/org/checkerframework/checker-qual/2.11.1/checker-qual-2.11.1.jar:/Users/chucheng/.m2/repository/com/google/errorprone/error_prone_annotations/2.3.4/error_prone_annotations-2.3.4.jar:/Users/chucheng/.m2/repository/com/google/j2objc/j2objc-annotations/1.3/j2objc-annotations-1.3.jar:/Users/chucheng/.m2/repository/org/codehaus/janino/commons-compiler/3.0.11/commons-compiler-3.0.11.jar:/Users/chucheng/.m2/repository/org/codehaus/janino/janino/3.0.11/janino-3.0.11.jar:/Users/chucheng/GitHub/flink/flink-table/flink-table-api-java-bridge/target/classes:/Users/chucheng/GitHub/flink/flink-table/flink-table-api-java/target/classes:/Users/chucheng/GitHub/flink/flink-table/flink-table-api-bridge-base/target/classes:/Users/chucheng/GitHub/flink/flink-java/target/classes:/Users/chucheng/.m2/repository/com/twitter/chill-java/0.7.6/chill-java-0.7.6.jar:/Users/chucheng/GitHub/flink/flink-streaming-java/target/classes:/Users/chucheng/GitHub/flink/flink-scala/target/classes:/Users/chucheng/GitHub/flink/flink-core/target/classes:/Users/chucheng/.m2/repository/org/apache/flink/flink-shaded-asm-9/9.2-15.0/flink-shaded-asm-9-9.2-15.0.jar:/Users/chucheng/.m2/repositor

[jira] [Updated] (FLINK-31491) table planner NestedLoopJoinTest.testLeftOuterJoinWithFilter failed in branch release1.16

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31491:

Environment: (was: branch release1.16 
NestedLoopJoinTest.testLeftOuterJoinWithFilter failed. azure ci: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47244&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4]

It can also be reproduced locally. I think the commit that caused it may be:
f0361c720cb18c4ae7dc669c6a5da5b09bc8f563

 )

> table planner NestedLoopJoinTest.testLeftOuterJoinWithFilter failed in branch 
> release1.16
> -
>
> Key: FLINK-31491
> URL: https://issues.apache.org/jira/browse/FLINK-31491
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31491) table planner NestedLoopJoinTest.testLeftOuterJoinWithFilter failed in branch release1.16

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31491:

Description: 
branch release1.16 NestedLoopJoinTest.testLeftOuterJoinWithFilter failed. 

Mar 16 12:30:00 [ERROR] Failures: 
Mar 16 12:30:00 [ERROR]   NestedLoopJoinTest.testLeftOuterJoinWithFilter1:37 
optimized exec plan expected:<...[InnerJoin], where=[[true], select=[a, e, f], 
build=[left])
Mar 16 12:30:00    :- Exchange(distribution=[broadcast])
Mar 16 12:30:00    :  +- Calc(select=[a], where=[(a = 10)])
Mar 16 12:30:00    :     +- LegacyTableSourceScan(table=[[default_catalog, 
default_database, MyTable1, source: [TestTableSource(a, b, c)]]], fields=[a, b, 
c])
Mar 16 12:30:00    +- Calc(select=[e, f], where=[(d = 10])])
Mar 16 12:30:00       +- LegacyT...> but was:<...[InnerJoin], where=[[(a = d)], 
select=[a, d, e, f], build=[left])
Mar 16 12:30:00    :- Exchange(distribution=[broadcast])
Mar 16 12:30:00    :  +- Calc(select=[a], where=[SEARCH(a, Sarg[10])])
Mar 16 12:30:00    :     +- LegacyTableSourceScan(table=[[default_catalog, 
default_database, MyTable1, source: [TestTableSource(a, b, c)]]], fields=[a, b, 
c])
Mar 16 12:30:00    +- Calc(select=[d, e, f], where=[SEARCH(d, Sarg[10]])])
Mar 16 12:30:00       +- LegacyT...>

azure ci: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47244&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4]

It can also be reproduced locally. I think the commit that caused it may be:
f0361c720cb18c4ae7dc669c6a5da5b09bc8f563

 

> table planner NestedLoopJoinTest.testLeftOuterJoinWithFilter failed in branch 
> release1.16
> -
>
> Key: FLINK-31491
> URL: https://issues.apache.org/jira/browse/FLINK-31491
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>
> branch release1.16 NestedLoopJoinTest.testLeftOuterJoinWithFilter failed. 
> Mar 16 12:30:00 [ERROR] Failures: 
> Mar 16 12:30:00 [ERROR]   NestedLoopJoinTest.testLeftOuterJoinWithFilter1:37 
> optimized exec plan expected:<...[InnerJoin], where=[[true], select=[a, e, 
> f], build=[left])
> Mar 16 12:30:00    :- Exchange(distribution=[broadcast])
> Mar 16 12:30:00    :  +- Calc(select=[a], where=[(a = 10)])
> Mar 16 12:30:00    :     +- LegacyTableSourceScan(table=[[default_catalog, 
> default_database, MyTable1, source: [TestTableSource(a, b, c)]]], fields=[a, 
> b, c])
> Mar 16 12:30:00    +- Calc(select=[e, f], where=[(d = 10])])
> Mar 16 12:30:00       +- LegacyT...> but was:<...[InnerJoin], where=[[(a = 
> d)], select=[a, d, e, f], build=[left])
> Mar 16 12:30:00    :- Exchange(distribution=[broadcast])
> Mar 16 12:30:00    :  +- Calc(select=[a], where=[SEARCH(a, Sarg[10])])
> Mar 16 12:30:00    :     +- LegacyTableSourceScan(table=[[default_catalog, 
> default_database, MyTable1, source: [TestTableSource(a, b, c)]]], fields=[a, 
> b, c])
> Mar 16 12:30:00    +- Calc(select=[d, e, f], where=[SEARCH(d, Sarg[10]])])
> Mar 16 12:30:00       +- LegacyT...>
> azure ci: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47244&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4]
> It can also be reproduced locally. I think the commit that caused it may be:
> f0361c720cb18c4ae7dc669c6a5da5b09bc8f563
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31491) table planner NestedLoopJoinTest.testLeftOuterJoinWithFilter failed in branch release1.16

2023-03-16 Thread Ran Tao (Jira)
Ran Tao created FLINK-31491:
---

 Summary: table planner 
NestedLoopJoinTest.testLeftOuterJoinWithFilter failed in branch release1.16
 Key: FLINK-31491
 URL: https://issues.apache.org/jira/browse/FLINK-31491
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
 Environment: branch release1.16 
NestedLoopJoinTest.testLeftOuterJoinWithFilter failed. azure ci: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47244&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4]

It can also be reproduced locally. I think the commit that caused it may be:
f0361c720cb18c4ae7dc669c6a5da5b09bc8f563

 
Reporter: Ran Tao






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31319) Kafka new source partitionDiscoveryIntervalMs=0 cause bounded source can not quit

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31319:

Summary: Kafka new source partitionDiscoveryIntervalMs=0 cause bounded 
source can not quit  (was: Inconsistent condition judgement about kafka 
partitionDiscoveryIntervalMs cause potential bug)

> Kafka new source partitionDiscoveryIntervalMs=0 cause bounded source can not 
> quit
> -
>
> Key: FLINK-31319
> URL: https://issues.apache.org/jira/browse/FLINK-31319
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.17.0, 1.16.1, 1.16.2
>Reporter: Ran Tao
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2023-03-04-01-37-29-360.png, 
> image-2023-03-04-01-39-20-352.png, image-2023-03-04-01-40-44-124.png, 
> image-2023-03-04-01-41-55-664.png
>
>
> As the Kafka option description states, partitionDiscoveryIntervalMs <= 0
> means partition discovery is disabled.
> !image-2023-03-04-01-37-29-360.png|width=781,height=147!
> This matches how the Kafka enumerator is started:
> !image-2023-03-04-01-39-20-352.png|width=465,height=311!
> but the inner handlePartitionSplitChanges uses the wrong if condition (< 0):
> !image-2023-03-04-01-40-44-124.png|width=576,height=237!
>  
> This causes noMoreNewPartitionSplits to never be set to true.
> !image-2023-03-04-01-41-55-664.png|width=522,height=610!
> As a result, a bounded source never calls signalNoMoreSplits, so it does not
> quit. Besides, the two branches of the if condition should be mutually exclusive.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31473) Add new show operations docs

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31473:

Description: Add enhanced show sql syntax docs.

> Add new show operations docs
> 
>
> Key: FLINK-31473
> URL: https://issues.apache.org/jira/browse/FLINK-31473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Ran Tao
>Priority: Major
>
> Add enhanced show sql syntax docs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31473) Add new show operations docs

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31473:

Priority: Minor  (was: Major)

> Add new show operations docs
> 
>
> Key: FLINK-31473
> URL: https://issues.apache.org/jira/browse/FLINK-31473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Ran Tao
>Priority: Minor
>
> Add enhanced show sql syntax docs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31473) Add new show operations docs

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31473:

Description: Docs will be added in each individual ticket, so closing this issue.

> Add new show operations docs
> 
>
> Key: FLINK-31473
> URL: https://issues.apache.org/jira/browse/FLINK-31473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Ran Tao
>Priority: Major
>
> Docs will be added in each individual ticket, so closing this issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31473) Add new show operations docs

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31473:

Description: (was: Docs will be added in each individual ticket, so closing
this issue.)

> Add new show operations docs
> 
>
> Key: FLINK-31473
> URL: https://issues.apache.org/jira/browse/FLINK-31473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Ran Tao
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31473) Add new show operations docs

2023-03-16 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17701039#comment-17701039
 ] 

Ran Tao commented on FLINK-31473:
-

Docs will be added in each individual ticket, so closing this issue.

> Add new show operations docs
> 
>
> Key: FLINK-31473
> URL: https://issues.apache.org/jira/browse/FLINK-31473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Ran Tao
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31380) Support enhanced show catalogs syntax

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31380:

Summary: Support enhanced show catalogs syntax  (was: Add filter support 
for ShowCatalogs)

> Support enhanced show catalogs syntax
> -
>
> Key: FLINK-31380
> URL: https://issues.apache.org/jira/browse/FLINK-31380
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Major
>
> As discussed in the FLIP, we will support new syntax for some show operations.
> To avoid bloat, this ticket supports ShowCatalogs.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31481) Support enhanced show databases syntax

2023-03-16 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31481:

Summary: Support enhanced show databases syntax  (was: Add filter support 
for ShowDatabases)

> Support enhanced show databases syntax
> --
>
> Key: FLINK-31481
> URL: https://issues.apache.org/jira/browse/FLINK-31481
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Priority: Major
>
> As discussed in the FLIP, to avoid bloat, this ticket supports ShowDatabases.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31481) Add filter support for ShowDatabases

2023-03-15 Thread Ran Tao (Jira)
Ran Tao created FLINK-31481:
---

 Summary: Add filter support for ShowDatabases
 Key: FLINK-31481
 URL: https://issues.apache.org/jira/browse/FLINK-31481
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API, Table SQL / Planner
Reporter: Ran Tao


As discussed in the FLIP, to avoid bloat, this ticket supports ShowDatabases.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31473) Add new show operations docs

2023-03-15 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao closed FLINK-31473.
---
Resolution: Invalid

> Add new show operations docs
> 
>
> Key: FLINK-31473
> URL: https://issues.apache.org/jira/browse/FLINK-31473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Ran Tao
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31380) Add filter support for ShowCatalogs

2023-03-15 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31380:

Summary: Add filter support for ShowCatalogs  (was: Add filter support for 
ShowCatalog)

> Add filter support for ShowCatalogs
> ---
>
> Key: FLINK-31380
> URL: https://issues.apache.org/jira/browse/FLINK-31380
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Major
>
> As discussed in the FLIP, we will support new syntax for some show operations.
> To avoid bloat, this ticket supports ShowCatalogs and ShowDatabases.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31380) Add filter support for ShowCatalogs

2023-03-15 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31380:

Description: 
As discussed in the FLIP, we will support new syntax for some show operations.

To avoid bloat, this ticket supports ShowCatalogs.
 

  was:
As discussed in the FLIP, we will support new syntax for some show operations.
To avoid bloat, this ticket supports ShowCatalogs and ShowDatabases.
 


> Add filter support for ShowCatalogs
> ---
>
> Key: FLINK-31380
> URL: https://issues.apache.org/jira/browse/FLINK-31380
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Major
>
> As discussed in the FLIP, we will support new syntax for some show operations.
> To avoid bloat, this ticket supports ShowCatalogs.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31380) Add filter support for ShowCatalog and ShowDatabases

2023-03-15 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17700498#comment-17700498
 ] 

Ran Tao edited comment on FLINK-31380 at 3/16/23 3:57 AM:
--

show catalogs;
+-----------------+
|    catalog name |
+-----------------+
|        catalog1 |
|        catalog2 |
| default_catalog |
+-----------------+
3 rows in set

show catalogs like '%log1';
-- show catalogs ilike '%log1';
-- show catalogs ilike '%LOG1';
+--------------+
| catalog name |
+--------------+
|     catalog1 |
+--------------+
1 row in set


was (Author: lemonjing):
show catalogs;
+-----------------+
|    catalog name |
+-----------------+
|        catalog1 |
|        catalog2 |
| default_catalog |
+-----------------+
3 rows in set

show catalogs like '%log1';
-- show catalogs ilike '%log1';
-- show catalogs ilike '%LOG1';
+--------------+
| catalog name |
+--------------+
|     catalog1 |
+--------------+
1 row in set

show databases;
+---------------+
| database name |
+---------------+
|       default |
|           db1 |
|           db2 |
+---------------+
3 rows in set

show databases like 'default%';
-- show databases ilike 'default%';
-- show databases not like '%db1';
+---------------+
| database name |
+---------------+
|       default |
+---------------+
1 row in set

show databases from catalog1;
-- show databases in catalog1;
+---------------+
| database name |
+---------------+
|       default |
|           db1 |
|           db2 |
+---------------+
3 rows in set

show databases from catalog1 like '%db1';
-- show databases in catalog1 like '%db1';
-- show databases in catalog1 ilike '%db1';
+---------------+
| database name |
+---------------+
|           db1 |
+---------------+
1 row in set

> Add filter support for ShowCatalog and ShowDatabases
> 
>
> Key: FLINK-31380
> URL: https://issues.apache.org/jira/browse/FLINK-31380
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Major
>
> As discussed in the FLIP, we will support new syntax for some show operations.
> To avoid bloat, this ticket supports ShowCatalogs and ShowDatabases.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31380) Add filter support for ShowCatalog

2023-03-15 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31380:

Summary: Add filter support for ShowCatalog  (was: Add filter support for 
ShowCatalog and ShowDatabases)

> Add filter support for ShowCatalog
> --
>
> Key: FLINK-31380
> URL: https://issues.apache.org/jira/browse/FLINK-31380
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Ran Tao
>Priority: Major
>
> As discussed in the FLIP, we will support new syntax for some show operations.
> To avoid bloat, this ticket supports ShowCatalogs and ShowDatabases.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2023-03-15 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17700929#comment-17700929
 ] 

Ran Tao commented on FLINK-31472:
-

Yes. I've found that this failure occurs occasionally rather than 
consistently. It might be a problem with my local machine.
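
For context on the exception itself: blocking mailbox operations are only
legal on the thread that owns the mailbox, so a processing-time callback that
ends up calling AsyncSinkWriter.flush() from a different thread trips the
check. A stripped-down sketch of that invariant (simplified names, not
Flink's actual TaskMailboxImpl):

public final class MailboxSketch {
    // The single thread allowed to run blocking mailbox operations.
    private final Thread mailboxThread = Thread.currentThread();

    private void checkIsMailboxThread() {
        if (Thread.currentThread() != mailboxThread) {
            throw new IllegalStateException(
                    "Illegal thread detected. This method must be called "
                            + "from inside the mailbox thread!");
        }
    }

    public void yieldToMailbox() {
        // Mirrors the MailboxExecutorImpl.yield() -> TaskMailboxImpl.take()
        // path in the stack trace below: both must run on the mailbox thread.
        checkIsMailboxThread();
        // ... take the next mail and run it ...
    }
}

Under this invariant, an intermittent failure would be consistent with the
test's timer callback only sometimes executing off the mailbox thread.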

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1
>Reporter: Ran Tao
>Priority: Major
>
> When running mvn clean test, this case failed occasionally.
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExe
